Test Report: Docker_Linux_crio 22128

2cb2c94398211ca18cf7c1877ff6bae2d6b3d16e:2025-12-13:42756

Test fail (27/415)

TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable volcano --alsologtostderr -v=1: exit status 11 (247.610858ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 08:31:03.052740   19407 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:03.053048   19407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:03.053059   19407 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:03.053066   19407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:03.053318   19407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:03.053601   19407 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:03.053970   19407 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:03.053993   19407 addons.go:622] checking whether the cluster is paused
	I1213 08:31:03.054090   19407 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:03.054106   19407 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:03.054502   19407 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:03.073611   19407 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:03.073676   19407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:03.090253   19407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:03.185768   19407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:03.185857   19407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:03.213872   19407 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:03.213918   19407 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:03.213922   19407 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:03.213925   19407 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:03.213928   19407 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:03.213932   19407 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:03.213935   19407 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:03.213938   19407 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:03.213941   19407 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:03.213947   19407 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:03.213950   19407 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:03.213954   19407 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:03.213958   19407 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:03.213962   19407 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:03.213967   19407 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:03.213975   19407 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:03.213983   19407 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:03.213989   19407 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:03.213993   19407 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:03.213998   19407 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:03.214003   19407 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:03.214007   19407 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:03.214010   19407 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:03.214013   19407 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:03.214015   19407 cri.go:89] found id: ""
	I1213 08:31:03.214055   19407 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:03.228290   19407 out.go:203] 
	W1213 08:31:03.229528   19407 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:03.229548   19407 out.go:285] * 
	* 
	W1213 08:31:03.232462   19407 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:03.233751   19407 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (13.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry


=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.743553ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002839729s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002800525s
addons_test.go:394: (dbg) Run:  kubectl --context addons-916029 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-916029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-916029 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.023665475s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable registry --alsologtostderr -v=1: exit status 11 (271.690645ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 08:31:25.352126   21604 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:25.352256   21604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.352266   21604 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:25.352270   21604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.352470   21604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:25.352741   21604 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:25.353118   21604 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.353140   21604 addons.go:622] checking whether the cluster is paused
	I1213 08:31:25.353230   21604 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.353243   21604 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:25.353652   21604 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:25.374742   21604 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:25.374816   21604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:25.396143   21604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:25.494039   21604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:25.494118   21604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:25.527780   21604 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:25.527806   21604 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:25.527812   21604 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:25.527817   21604 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:25.527821   21604 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:25.527827   21604 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:25.527831   21604 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:25.527837   21604 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:25.527842   21604 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:25.527857   21604 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:25.527864   21604 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:25.527868   21604 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:25.527880   21604 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:25.527884   21604 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:25.527888   21604 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:25.527895   21604 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:25.527903   21604 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:25.527909   21604 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:25.527913   21604 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:25.527917   21604 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:25.527936   21604 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:25.527941   21604 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:25.527945   21604 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:25.527956   21604 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:25.527959   21604 cri.go:89] found id: ""
	I1213 08:31:25.528008   21604 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:25.545415   21604 out.go:203] 
	W1213 08:31:25.546779   21604 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:25.546812   21604 out.go:285] * 
	* 
	W1213 08:31:25.552125   21604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:25.553535   21604 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.53s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds


=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.810039ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-916029
addons_test.go:334: (dbg) Run:  kubectl --context addons-916029 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (239.402924ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1213 08:31:25.790961   22028 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:25.791229   22028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.791239   22028 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:25.791243   22028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.791450   22028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:25.791725   22028 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:25.792034   22028 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.792052   22028 addons.go:622] checking whether the cluster is paused
	I1213 08:31:25.792130   22028 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.792142   22028 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:25.792539   22028 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:25.810402   22028 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:25.810456   22028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:25.826997   22028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:25.922606   22028 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:25.922694   22028 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:25.950825   22028 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:25.950873   22028 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:25.950879   22028 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:25.950884   22028 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:25.950888   22028 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:25.950899   22028 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:25.950904   22028 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:25.950909   22028 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:25.950914   22028 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:25.950927   22028 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:25.950935   22028 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:25.950940   22028 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:25.950947   22028 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:25.950952   22028 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:25.950959   22028 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:25.950968   22028 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:25.950976   22028 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:25.950982   22028 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:25.950987   22028 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:25.950991   22028 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:25.950996   22028 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:25.951000   22028 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:25.951005   22028 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:25.951012   22028 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:25.951016   22028 cri.go:89] found id: ""
	I1213 08:31:25.951073   22028 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:25.965463   22028 out.go:203] 
	W1213 08:31:25.966622   22028 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:25.966642   22028 out.go:285] * 
	* 
	W1213 08:31:25.969519   22028 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:25.970772   22028 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (146.64s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress


=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-916029 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-916029 replace --force -f testdata/nginx-ingress-v1.yaml
2025/12/13 08:31:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:249: (dbg) Run:  kubectl --context addons-916029 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [684f6463-9191-40cb-bf9b-2f0896943323] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [684f6463-9191-40cb-bf9b-2f0896943323] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003437685s
I1213 08:31:34.590525    9303 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.186755188s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-916029 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-916029
helpers_test.go:244: (dbg) docker inspect addons-916029:

-- stdout --
	[
	    {
	        "Id": "3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4",
	        "Created": "2025-12-13T08:29:21.980066347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11724,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:29:22.016232535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/hosts",
	        "LogPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4-json.log",
	        "Name": "/addons-916029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-916029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-916029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4",
	                "LowerDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-916029",
	                "Source": "/var/lib/docker/volumes/addons-916029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-916029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-916029",
	                "name.minikube.sigs.k8s.io": "addons-916029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dfb20378f22610eca91961f905378322aa67dfb22ae2b60c2f95b1e54a778df6",
	            "SandboxKey": "/var/run/docker/netns/dfb20378f226",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-916029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1065423f0bf7f549d79255e3aec14adf8b6fc7a290fbc66b4874cee25a2f6f5d",
	                    "EndpointID": "e5f4d7fa8f3ae81408c4edfd4cc14ebb22920d7f52578a9880672c344b1c340a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:4c:ec:08:97:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-916029",
	                        "3894a43e7e24"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-916029 -n addons-916029
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-916029 logs -n 25: (1.108106052s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-949734 --alsologtostderr --binary-mirror http://127.0.0.1:46283 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-949734 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-949734                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-949734 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ addons  │ disable dashboard -p addons-916029                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-916029                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ start   │ -p addons-916029 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-916029 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-916029 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ ssh     │ addons-916029 ssh cat /opt/local-path-provisioner/pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-916029 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ ip      │ addons-916029 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-916029 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-916029                                                                                                                                                                                                                                                                                                                                                                                           │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-916029 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ ssh     │ addons-916029 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ ip      │ addons-916029 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-916029        │ jenkins │ v1.37.0 │ 13 Dec 25 08:33 UTC │ 13 Dec 25 08:33 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
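The prefix format documented above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) can be split mechanically when post-processing these logs. The following is a minimal Go sketch; the regular expression is an assumption derived from that format description, not code taken from minikube:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the prefix documented above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
        sample := "I1213 08:28:58.330896   11062 out.go:360] Setting OutFile to fd 1 ..."
        m := glogLine.FindStringSubmatch(sample)
        if m == nil {
            fmt.Println("not a glog-formatted line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }

Run against the first log line below, this prints severity I, date 1213, pid 11062 and source out.go:360.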
	I1213 08:28:58.330896   11062 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:58.331164   11062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.331175   11062 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:58.331182   11062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.331413   11062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:28:58.331969   11062 out.go:368] Setting JSON to false
	I1213 08:28:58.332774   11062 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":690,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:58.332829   11062 start.go:143] virtualization: kvm guest
	I1213 08:28:58.334662   11062 out.go:179] * [addons-916029] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:58.336362   11062 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:28:58.336361   11062 notify.go:221] Checking for updates...
	I1213 08:28:58.339105   11062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:58.340301   11062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:28:58.341500   11062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:28:58.342877   11062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:28:58.344152   11062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:28:58.345544   11062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:58.368897   11062 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:28:58.369007   11062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:58.423115   11062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 08:28:58.413568723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:58.423215   11062 docker.go:319] overlay module found
	I1213 08:28:58.424959   11062 out.go:179] * Using the docker driver based on user configuration
	I1213 08:28:58.426230   11062 start.go:309] selected driver: docker
	I1213 08:28:58.426253   11062 start.go:927] validating driver "docker" against <nil>
	I1213 08:28:58.426265   11062 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:28:58.426855   11062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:58.478231   11062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 08:28:58.46930263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:58.478383   11062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:58.478621   11062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:28:58.480230   11062 out.go:179] * Using Docker driver with root privileges
	I1213 08:28:58.481347   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:28:58.481407   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:28:58.481420   11062 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 08:28:58.481475   11062 start.go:353] cluster config:
	{Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 08:28:58.482751   11062 out.go:179] * Starting "addons-916029" primary control-plane node in "addons-916029" cluster
	I1213 08:28:58.483829   11062 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 08:28:58.484902   11062 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:28:58.486030   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:28:58.486065   11062 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 08:28:58.486075   11062 cache.go:65] Caching tarball of preloaded images
	I1213 08:28:58.486131   11062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:28:58.486170   11062 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 08:28:58.486185   11062 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 08:28:58.486629   11062 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json ...
	I1213 08:28:58.486660   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json: {Name:mke73757f22c2faac14c0204a0d0625a7a26d76a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:28:58.502430   11062 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:28:58.502581   11062 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:28:58.502600   11062 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 08:28:58.502604   11062 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 08:28:58.502612   11062 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 08:28:58.502619   11062 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 08:29:11.164096   11062 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 08:29:11.164135   11062 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:29:11.164197   11062 start.go:360] acquireMachinesLock for addons-916029: {Name:mk5895ba534e61d0049c0be22d884e3317bb56b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:11.164296   11062 start.go:364] duration metric: took 76.351µs to acquireMachinesLock for "addons-916029"
	I1213 08:29:11.164320   11062 start.go:93] Provisioning new machine with config: &{Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:29:11.164389   11062 start.go:125] createHost starting for "" (driver="docker")
	I1213 08:29:11.166113   11062 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 08:29:11.166316   11062 start.go:159] libmachine.API.Create for "addons-916029" (driver="docker")
	I1213 08:29:11.166344   11062 client.go:173] LocalClient.Create starting
	I1213 08:29:11.166419   11062 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem
	I1213 08:29:11.394456   11062 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem
	I1213 08:29:11.436358   11062 cli_runner.go:164] Run: docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 08:29:11.454587   11062 cli_runner.go:211] docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 08:29:11.454672   11062 network_create.go:284] running [docker network inspect addons-916029] to gather additional debugging logs...
	I1213 08:29:11.454694   11062 cli_runner.go:164] Run: docker network inspect addons-916029
	W1213 08:29:11.470235   11062 cli_runner.go:211] docker network inspect addons-916029 returned with exit code 1
	I1213 08:29:11.470261   11062 network_create.go:287] error running [docker network inspect addons-916029]: docker network inspect addons-916029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-916029 not found
	I1213 08:29:11.470280   11062 network_create.go:289] output of [docker network inspect addons-916029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-916029 not found
	
	** /stderr **
	I1213 08:29:11.470374   11062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:29:11.487838   11062 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00170aa20}
	I1213 08:29:11.487886   11062 network_create.go:124] attempt to create docker network addons-916029 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 08:29:11.487935   11062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-916029 addons-916029
	I1213 08:29:11.535152   11062 network_create.go:108] docker network addons-916029 192.168.49.0/24 created
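The gateway, client range, and broadcast values logged for the free 192.168.49.0/24 subnet above follow directly from the /24 mask. A self-contained Go sketch of that arithmetic (variable names are illustrative, not minikube's own helpers):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Derive the addresses logged for the 192.168.49.0/24 network above.
        _, ipnet, err := net.ParseCIDR("192.168.49.0/24")
        if err != nil {
            panic(err)
        }
        base := ipnet.IP.To4()
        mask := ipnet.Mask

        gateway := make(net.IP, 4)
        broadcast := make(net.IP, 4)
        for i := 0; i < 4; i++ {
            gateway[i] = base[i]
            broadcast[i] = base[i] | ^mask[i]
        }
        gateway[3] |= 1 // first usable host, used as the bridge gateway

        clientMin := make(net.IP, 4)
        clientMax := make(net.IP, 4)
        copy(clientMin, gateway)
        clientMin[3]++ // .2
        copy(clientMax, broadcast)
        clientMax[3]-- // .254

        fmt.Println("gateway:", gateway, "clients:", clientMin, "-", clientMax, "broadcast:", broadcast)
    }

It prints gateway 192.168.49.1, clients 192.168.49.2 - 192.168.49.254 and broadcast 192.168.49.255, matching the free-subnet entry logged at 08:29:11.487838.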
	I1213 08:29:11.535184   11062 kic.go:121] calculated static IP "192.168.49.2" for the "addons-916029" container
	I1213 08:29:11.535243   11062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 08:29:11.551306   11062 cli_runner.go:164] Run: docker volume create addons-916029 --label name.minikube.sigs.k8s.io=addons-916029 --label created_by.minikube.sigs.k8s.io=true
	I1213 08:29:11.568450   11062 oci.go:103] Successfully created a docker volume addons-916029
	I1213 08:29:11.568535   11062 cli_runner.go:164] Run: docker run --rm --name addons-916029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --entrypoint /usr/bin/test -v addons-916029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 08:29:18.123717   11062 cli_runner.go:217] Completed: docker run --rm --name addons-916029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --entrypoint /usr/bin/test -v addons-916029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.555137766s)
	I1213 08:29:18.123748   11062 oci.go:107] Successfully prepared a docker volume addons-916029
	I1213 08:29:18.123775   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:18.123786   11062 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 08:29:18.123846   11062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-916029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 08:29:21.909269   11062 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-916029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.785383863s)
	I1213 08:29:21.909298   11062 kic.go:203] duration metric: took 3.785511007s to extract preloaded images to volume ...
	W1213 08:29:21.909388   11062 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 08:29:21.909416   11062 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 08:29:21.909453   11062 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 08:29:21.963282   11062 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-916029 --name addons-916029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-916029 --network addons-916029 --ip 192.168.49.2 --volume addons-916029:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 08:29:22.265745   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Running}}
	I1213 08:29:22.284889   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.304303   11062 cli_runner.go:164] Run: docker exec addons-916029 stat /var/lib/dpkg/alternatives/iptables
	I1213 08:29:22.354219   11062 oci.go:144] the created container "addons-916029" has a running status.
	I1213 08:29:22.354254   11062 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa...
	I1213 08:29:22.394987   11062 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 08:29:22.428238   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.446724   11062 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 08:29:22.446746   11062 kic_runner.go:114] Args: [docker exec --privileged addons-916029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 08:29:22.488608   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.508918   11062 machine.go:94] provisionDockerMachine start ...
	I1213 08:29:22.509023   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:22.533176   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:22.533417   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:22.533429   11062 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:29:22.534778   11062 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57410->127.0.0.1:32768: read: connection reset by peer
	I1213 08:29:25.668258   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-916029
	
	I1213 08:29:25.668288   11062 ubuntu.go:182] provisioning hostname "addons-916029"
	I1213 08:29:25.668356   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:25.685794   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:25.686090   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:25.686104   11062 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-916029 && echo "addons-916029" | sudo tee /etc/hostname
	I1213 08:29:25.824765   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-916029
	
	I1213 08:29:25.824845   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:25.842753   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:25.842984   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:25.843000   11062 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-916029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-916029/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-916029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:29:25.973973   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:29:25.973999   11062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 08:29:25.974036   11062 ubuntu.go:190] setting up certificates
	I1213 08:29:25.974048   11062 provision.go:84] configureAuth start
	I1213 08:29:25.974106   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:25.991051   11062 provision.go:143] copyHostCerts
	I1213 08:29:25.991125   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 08:29:25.991243   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 08:29:25.991311   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 08:29:25.991363   11062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.addons-916029 san=[127.0.0.1 192.168.49.2 addons-916029 localhost minikube]
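The server certificate above is issued for the SANs [127.0.0.1 192.168.49.2 addons-916029 localhost minikube]. A compressed Go sketch of producing a certificate with those SANs is shown here; it is self-signed for brevity, whereas minikube actually signs this cert with the minikubeCA key referenced in the log, and the lifetime is only chosen to echo the CertExpiration value from the cluster config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs taken from the log line above; self-signed for brevity.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-916029"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // echoes CertExpiration from the config; illustrative only
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"addons-916029", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }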
	I1213 08:29:26.068458   11062 provision.go:177] copyRemoteCerts
	I1213 08:29:26.068525   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:29:26.068563   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.086015   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.181354   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:29:26.199612   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 08:29:26.216263   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:29:26.232776   11062 provision.go:87] duration metric: took 258.709066ms to configureAuth
	I1213 08:29:26.232801   11062 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:29:26.232972   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:26.233080   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.250391   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:26.250674   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:26.250699   11062 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 08:29:26.512040   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 08:29:26.512066   11062 machine.go:97] duration metric: took 4.003123158s to provisionDockerMachine
	I1213 08:29:26.512076   11062 client.go:176] duration metric: took 15.345726105s to LocalClient.Create
	I1213 08:29:26.512091   11062 start.go:167] duration metric: took 15.345777412s to libmachine.API.Create "addons-916029"
	I1213 08:29:26.512125   11062 start.go:293] postStartSetup for "addons-916029" (driver="docker")
	I1213 08:29:26.512138   11062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:29:26.512194   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:29:26.512241   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.528948   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.625750   11062 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:29:26.629067   11062 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:29:26.629091   11062 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:29:26.629101   11062 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 08:29:26.629151   11062 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 08:29:26.629188   11062 start.go:296] duration metric: took 117.055364ms for postStartSetup
	I1213 08:29:26.629476   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:26.647571   11062 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json ...
	I1213 08:29:26.647817   11062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:29:26.647857   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.664479   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.756306   11062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:29:26.760661   11062 start.go:128] duration metric: took 15.596257379s to createHost
	I1213 08:29:26.760688   11062 start.go:83] releasing machines lock for "addons-916029", held for 15.596379109s
	I1213 08:29:26.760754   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:26.777424   11062 ssh_runner.go:195] Run: cat /version.json
	I1213 08:29:26.777472   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.777563   11062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:29:26.777619   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.795512   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.795836   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.940192   11062 ssh_runner.go:195] Run: systemctl --version
	I1213 08:29:26.946399   11062 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 08:29:26.980496   11062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:29:26.985031   11062 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:29:26.985089   11062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:29:27.009777   11062 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 08:29:27.009803   11062 start.go:496] detecting cgroup driver to use...
	I1213 08:29:27.009837   11062 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 08:29:27.009894   11062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:29:27.025559   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:29:27.037360   11062 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:29:27.037404   11062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:29:27.052811   11062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:29:27.069049   11062 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:29:27.150572   11062 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:29:27.235023   11062 docker.go:234] disabling docker service ...
	I1213 08:29:27.235081   11062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:29:27.252651   11062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:29:27.264776   11062 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:29:27.346939   11062 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:29:27.427152   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:29:27.439351   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:29:27.452889   11062 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 08:29:27.452946   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.462642   11062 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 08:29:27.462697   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.471119   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.479140   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.487683   11062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:29:27.496178   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.504408   11062 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.517135   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.525066   11062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:29:27.531605   11062 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 08:29:27.531659   11062 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 08:29:27.543556   11062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
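The three commands above verify bridge netfilter support, load br_netfilter when the sysctl is missing, and enable IPv4 forwarding. A throwaway Go check that reads back the same two /proc/sys entries (paths taken from those commands, not from minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Read back the two kernel settings touched by the start sequence above.
        for _, p := range []string{
            "/proc/sys/net/ipv4/ip_forward",
            "/proc/sys/net/bridge/bridge-nf-call-iptables",
        } {
            b, err := os.ReadFile(p)
            if err != nil {
                fmt.Printf("%s: %v\n", p, err) // missing until br_netfilter is loaded
                continue
            }
            fmt.Printf("%s = %s\n", p, strings.TrimSpace(string(b)))
        }
    }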
	I1213 08:29:27.550600   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:27.628204   11062 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 08:29:27.753146   11062 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 08:29:27.753235   11062 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 08:29:27.757062   11062 start.go:564] Will wait 60s for crictl version
	I1213 08:29:27.757109   11062 ssh_runner.go:195] Run: which crictl
	I1213 08:29:27.760424   11062 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:29:27.784923   11062 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 08:29:27.785048   11062 ssh_runner.go:195] Run: crio --version
	I1213 08:29:27.812026   11062 ssh_runner.go:195] Run: crio --version
	I1213 08:29:27.840266   11062 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 08:29:27.841604   11062 cli_runner.go:164] Run: docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:29:27.858217   11062 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:29:27.862041   11062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:29:27.871907   11062 kubeadm.go:884] updating cluster {Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:29:27.872021   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:27.872068   11062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:29:27.902679   11062 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 08:29:27.902700   11062 crio.go:433] Images already preloaded, skipping extraction
	I1213 08:29:27.902751   11062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:29:27.926328   11062 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 08:29:27.926348   11062 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:29:27.926355   11062 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 08:29:27.926436   11062 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-916029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 08:29:27.926541   11062 ssh_runner.go:195] Run: crio config
	I1213 08:29:27.970733   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:29:27.970765   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:29:27.970784   11062 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:29:27.970812   11062 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-916029 NodeName:addons-916029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:29:27.970946   11062 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-916029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:29:27.971025   11062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 08:29:27.978864   11062 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:29:27.978928   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:29:27.986099   11062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 08:29:27.998099   11062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 08:29:28.012519   11062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1213 08:29:28.024069   11062 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:29:28.027246   11062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:29:28.037006   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:28.119015   11062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:29:28.147756   11062 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029 for IP: 192.168.49.2
	I1213 08:29:28.147778   11062 certs.go:195] generating shared ca certs ...
	I1213 08:29:28.147797   11062 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.147958   11062 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 08:29:28.264036   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt ...
	I1213 08:29:28.264064   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt: {Name:mk6f4ee1daf6a670a71cd3dd080f8993bfdb577b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.264227   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key ...
	I1213 08:29:28.264239   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key: {Name:mka38e30a6f036b3c2f294b94ec42c8b0adf6ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.264316   11062 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 08:29:28.286058   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt ...
	I1213 08:29:28.286079   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt: {Name:mk8ae9b8d202e16240cd2b000add4644ae6c0413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.286193   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key ...
	I1213 08:29:28.286204   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key: {Name:mkde16481c205129da63b86fd12ee04100fe81c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.286269   11062 certs.go:257] generating profile certs ...
	I1213 08:29:28.286319   11062 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key
	I1213 08:29:28.286331   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt with IP's: []
	I1213 08:29:28.313252   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt ...
	I1213 08:29:28.313276   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: {Name:mk05226f9e9a0d94a2b200dc42c1ed79ca290688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.313422   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key ...
	I1213 08:29:28.313433   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key: {Name:mk6d12a9d6d1bffb939c2c87b1972932835eef08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.313522   11062 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c
	I1213 08:29:28.313540   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 08:29:28.352215   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c ...
	I1213 08:29:28.352238   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c: {Name:mkf26e5a215376a6d773f3e86a1c5513d5049010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.352383   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c ...
	I1213 08:29:28.352396   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c: {Name:mk6aff181408a3e98efe2b7ea62b1cca2d842d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.352464   11062 certs.go:382] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt
	I1213 08:29:28.352569   11062 certs.go:386] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key
	I1213 08:29:28.352624   11062 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key
	I1213 08:29:28.352641   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt with IP's: []
	I1213 08:29:28.398368   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt ...
	I1213 08:29:28.398401   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt: {Name:mk8a3e4f27724468ad6a80836fa15030ca4ea359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.398568   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key ...
	I1213 08:29:28.398578   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key: {Name:mk554f31534e58c6c00827c652100f20a150e212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.398756   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:29:28.398795   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:29:28.398820   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:29:28.398842   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 08:29:28.399482   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:29:28.417339   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:29:28.434480   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:29:28.451271   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:29:28.468261   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 08:29:28.484741   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:29:28.500854   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:29:28.517259   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:29:28.534219   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:29:28.552912   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:29:28.565367   11062 ssh_runner.go:195] Run: openssl version
	I1213 08:29:28.571232   11062 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.578245   11062 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:29:28.588169   11062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.591928   11062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.591992   11062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.625154   11062 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:29:28.632342   11062 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 08:29:28.639495   11062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:29:28.643031   11062 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 08:29:28.643079   11062 kubeadm.go:401] StartCluster: {Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:28.643153   11062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:29:28.643204   11062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:29:28.668446   11062 cri.go:89] found id: ""
	I1213 08:29:28.668542   11062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:29:28.676168   11062 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:29:28.683673   11062 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:29:28.683718   11062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:29:28.691421   11062 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:29:28.691436   11062 kubeadm.go:158] found existing configuration files:
	
	I1213 08:29:28.691471   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 08:29:28.698608   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:29:28.698654   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:29:28.705436   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 08:29:28.712434   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:29:28.712481   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:29:28.719122   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 08:29:28.725986   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:29:28.726028   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:29:28.732675   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 08:29:28.739730   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:29:28.739774   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:29:28.747045   11062 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:29:28.781516   11062 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 08:29:28.781570   11062 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:29:28.812595   11062 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:29:28.812670   11062 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 08:29:28.812715   11062 kubeadm.go:319] OS: Linux
	I1213 08:29:28.812767   11062 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:29:28.812818   11062 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:29:28.812887   11062 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:29:28.812930   11062 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:29:28.812971   11062 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:29:28.813023   11062 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:29:28.813065   11062 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:29:28.813147   11062 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 08:29:28.869856   11062 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:29:28.870049   11062 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:29:28.870185   11062 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:29:28.876369   11062 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:29:28.878372   11062 out.go:252]   - Generating certificates and keys ...
	I1213 08:29:28.878469   11062 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:29:28.878571   11062 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:29:28.981268   11062 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 08:29:29.118074   11062 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 08:29:29.167474   11062 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 08:29:29.274514   11062 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 08:29:29.344379   11062 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 08:29:29.344554   11062 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-916029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:29:29.470053   11062 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 08:29:29.470234   11062 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-916029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:29:29.633396   11062 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 08:29:29.818086   11062 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 08:29:30.123327   11062 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 08:29:30.123425   11062 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:29:30.541521   11062 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:29:30.689228   11062 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:29:30.771292   11062 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:29:30.840360   11062 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:29:31.354737   11062 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:29:31.356324   11062 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:29:31.360141   11062 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:29:31.361866   11062 out.go:252]   - Booting up control plane ...
	I1213 08:29:31.361966   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:29:31.362049   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:29:31.362452   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:29:31.375837   11062 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:29:31.375953   11062 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:29:31.382435   11062 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:29:31.382661   11062 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:29:31.382708   11062 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:29:31.477369   11062 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:29:31.477562   11062 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:29:31.978671   11062 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.388074ms
	I1213 08:29:31.982412   11062 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 08:29:31.982555   11062 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 08:29:31.982716   11062 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 08:29:31.982842   11062 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 08:29:33.164784   11062 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.182519867s
	I1213 08:29:34.101969   11062 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.119816186s
	I1213 08:29:35.983786   11062 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001377853s
	I1213 08:29:35.999243   11062 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 08:29:36.009094   11062 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 08:29:36.017138   11062 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 08:29:36.017408   11062 kubeadm.go:319] [mark-control-plane] Marking the node addons-916029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 08:29:36.024532   11062 kubeadm.go:319] [bootstrap-token] Using token: h5re1o.w3neybk59b02aves
	I1213 08:29:36.025714   11062 out.go:252]   - Configuring RBAC rules ...
	I1213 08:29:36.025824   11062 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 08:29:36.028532   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 08:29:36.033112   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 08:29:36.036330   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 08:29:36.038711   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 08:29:36.040878   11062 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 08:29:36.389845   11062 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 08:29:36.803732   11062 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 08:29:37.391873   11062 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 08:29:37.392718   11062 kubeadm.go:319] 
	I1213 08:29:37.392811   11062 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 08:29:37.392822   11062 kubeadm.go:319] 
	I1213 08:29:37.392934   11062 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 08:29:37.392962   11062 kubeadm.go:319] 
	I1213 08:29:37.393018   11062 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 08:29:37.393099   11062 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 08:29:37.393168   11062 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 08:29:37.393195   11062 kubeadm.go:319] 
	I1213 08:29:37.393266   11062 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 08:29:37.393275   11062 kubeadm.go:319] 
	I1213 08:29:37.393335   11062 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 08:29:37.393344   11062 kubeadm.go:319] 
	I1213 08:29:37.393424   11062 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 08:29:37.393581   11062 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 08:29:37.393701   11062 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 08:29:37.393710   11062 kubeadm.go:319] 
	I1213 08:29:37.393812   11062 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 08:29:37.393928   11062 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 08:29:37.393936   11062 kubeadm.go:319] 
	I1213 08:29:37.394052   11062 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h5re1o.w3neybk59b02aves \
	I1213 08:29:37.394169   11062 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c \
	I1213 08:29:37.394194   11062 kubeadm.go:319] 	--control-plane 
	I1213 08:29:37.394207   11062 kubeadm.go:319] 
	I1213 08:29:37.394316   11062 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 08:29:37.394323   11062 kubeadm.go:319] 
	I1213 08:29:37.394418   11062 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h5re1o.w3neybk59b02aves \
	I1213 08:29:37.394591   11062 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c 
	I1213 08:29:37.396351   11062 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 08:29:37.396457   11062 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:29:37.396501   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:29:37.396515   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:29:37.399202   11062 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 08:29:37.400376   11062 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 08:29:37.404544   11062 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 08:29:37.404563   11062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 08:29:37.416939   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 08:29:37.612049   11062 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 08:29:37.612128   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:37.612172   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-916029 minikube.k8s.io/updated_at=2025_12_13T08_29_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-916029 minikube.k8s.io/primary=true
	I1213 08:29:37.622775   11062 ops.go:34] apiserver oom_adj: -16
	I1213 08:29:37.697240   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:38.197554   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:38.697476   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:39.198035   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:39.697347   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:40.198073   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:40.697616   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:41.197698   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:41.697916   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:42.197356   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:42.260586   11062 kubeadm.go:1114] duration metric: took 4.648516896s to wait for elevateKubeSystemPrivileges
	I1213 08:29:42.260623   11062 kubeadm.go:403] duration metric: took 13.617549011s to StartCluster
	I1213 08:29:42.260642   11062 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:42.260795   11062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:29:42.261315   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:42.261542   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 08:29:42.261570   11062 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:29:42.261639   11062 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 08:29:42.261746   11062 addons.go:70] Setting yakd=true in profile "addons-916029"
	I1213 08:29:42.261747   11062 addons.go:70] Setting gcp-auth=true in profile "addons-916029"
	I1213 08:29:42.261793   11062 addons.go:239] Setting addon yakd=true in "addons-916029"
	I1213 08:29:42.261802   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:42.261811   11062 mustload.go:66] Loading cluster: addons-916029
	I1213 08:29:42.261830   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.261804   11062 addons.go:70] Setting registry=true in profile "addons-916029"
	I1213 08:29:42.261830   11062 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-916029"
	I1213 08:29:42.261877   11062 addons.go:239] Setting addon registry=true in "addons-916029"
	I1213 08:29:42.261886   11062 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-916029"
	I1213 08:29:42.261935   11062 addons.go:70] Setting inspektor-gadget=true in profile "addons-916029"
	I1213 08:29:42.261951   11062 addons.go:239] Setting addon inspektor-gadget=true in "addons-916029"
	I1213 08:29:42.261958   11062 addons.go:70] Setting metrics-server=true in profile "addons-916029"
	I1213 08:29:42.261931   11062 addons.go:70] Setting volcano=true in profile "addons-916029"
	I1213 08:29:42.261973   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.261979   11062 addons.go:239] Setting addon metrics-server=true in "addons-916029"
	I1213 08:29:42.261996   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262005   11062 addons.go:239] Setting addon volcano=true in "addons-916029"
	I1213 08:29:42.262051   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262077   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:42.262316   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262326   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262396   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262479   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262507   11062 addons.go:70] Setting registry-creds=true in profile "addons-916029"
	I1213 08:29:42.262524   11062 addons.go:239] Setting addon registry-creds=true in "addons-916029"
	I1213 08:29:42.262543   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262647   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262702   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.263050   11062 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-916029"
	I1213 08:29:42.263076   11062 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-916029"
	I1213 08:29:42.263101   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.263852   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.264277   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.264461   11062 addons.go:70] Setting ingress=true in profile "addons-916029"
	I1213 08:29:42.264476   11062 addons.go:239] Setting addon ingress=true in "addons-916029"
	I1213 08:29:42.264525   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.264609   11062 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-916029"
	I1213 08:29:42.264634   11062 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-916029"
	I1213 08:29:42.264765   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.264983   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.265030   11062 addons.go:70] Setting volumesnapshots=true in profile "addons-916029"
	I1213 08:29:42.265047   11062 addons.go:239] Setting addon volumesnapshots=true in "addons-916029"
	I1213 08:29:42.265093   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.265590   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.265695   11062 out.go:179] * Verifying Kubernetes components...
	I1213 08:29:42.265928   11062 addons.go:70] Setting storage-provisioner=true in profile "addons-916029"
	I1213 08:29:42.265966   11062 addons.go:239] Setting addon storage-provisioner=true in "addons-916029"
	I1213 08:29:42.266005   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.269924   11062 addons.go:70] Setting default-storageclass=true in profile "addons-916029"
	I1213 08:29:42.269970   11062 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-916029"
	I1213 08:29:42.275697   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.278297   11062 addons.go:70] Setting cloud-spanner=true in profile "addons-916029"
	I1213 08:29:42.278364   11062 addons.go:239] Setting addon cloud-spanner=true in "addons-916029"
	I1213 08:29:42.278403   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.279180   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.280716   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.281222   11062 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-916029"
	I1213 08:29:42.261939   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.282881   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.283062   11062 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-916029"
	I1213 08:29:42.283118   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.283719   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.284036   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.282118   11062 addons.go:70] Setting ingress-dns=true in profile "addons-916029"
	I1213 08:29:42.284857   11062 addons.go:239] Setting addon ingress-dns=true in "addons-916029"
	I1213 08:29:42.284935   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.289309   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.290992   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:42.315265   11062 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1213 08:29:42.319021   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 08:29:42.319047   11062 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 08:29:42.319116   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.322205   11062 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-916029"
	I1213 08:29:42.322255   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.325342   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.334593   11062 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 08:29:42.338657   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 08:29:42.338681   11062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 08:29:42.338757   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.343923   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.347113   11062 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 08:29:42.348564   11062 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:29:42.348583   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 08:29:42.348638   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.348684   11062 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 08:29:42.350426   11062 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:29:42.350580   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 08:29:42.350849   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	W1213 08:29:42.365235   11062 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 08:29:42.375419   11062 addons.go:239] Setting addon default-storageclass=true in "addons-916029"
	I1213 08:29:42.375954   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.376855   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.378004   11062 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 08:29:42.378065   11062 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 08:29:42.378084   11062 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 08:29:42.378099   11062 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 08:29:42.382023   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 08:29:42.383341   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 08:29:42.383401   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 08:29:42.383644   11062 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:29:42.383662   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 08:29:42.383720   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.383880   11062 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:29:42.383894   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 08:29:42.383927   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.384071   11062 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:29:42.384082   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 08:29:42.384112   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.385503   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 08:29:42.385710   11062 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 08:29:42.385737   11062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:29:42.385813   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 08:29:42.385845   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 08:29:42.387302   11062 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 08:29:42.387355   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.387103   11062 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 08:29:42.387569   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 08:29:42.387148   11062 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:29:42.387600   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:29:42.387643   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.387959   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.388615   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 08:29:42.391632   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:42.392830   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:42.393359   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 08:29:42.394136   11062 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:29:42.394159   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 08:29:42.394317   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.396150   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 08:29:42.400112   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 08:29:42.401463   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 08:29:42.402958   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 08:29:42.402980   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 08:29:42.403047   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.409143   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 08:29:42.409517   11062 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 08:29:42.410471   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.411770   11062 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 08:29:42.411956   11062 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 08:29:42.411968   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 08:29:42.412020   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.415252   11062 out.go:179]   - Using image docker.io/busybox:stable
	I1213 08:29:42.416462   11062 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:29:42.416478   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 08:29:42.416611   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.444217   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.444511   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.444609   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.451686   11062 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:29:42.451987   11062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:29:42.452073   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.453130   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.454807   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.455478   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.459272   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.464569   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.468583   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.473147   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.484653   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.485577   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.486560   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.493548   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	W1213 08:29:42.495451   11062 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 08:29:42.495587   11062 retry.go:31] will retry after 238.053176ms: ssh: handshake failed: EOF
	I1213 08:29:42.500856   11062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:29:42.585945   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:29:42.607531   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 08:29:42.607555   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 08:29:42.608951   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:29:42.615637   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 08:29:42.615659   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 08:29:42.629881   11062 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 08:29:42.629903   11062 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 08:29:42.630220   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:29:42.636360   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 08:29:42.636382   11062 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 08:29:42.643430   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:29:42.644434   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 08:29:42.644451   11062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 08:29:42.648651   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:29:42.649387   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 08:29:42.649411   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 08:29:42.651203   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 08:29:42.651221   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 08:29:42.653380   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:29:42.663524   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 08:29:42.671575   11062 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:29:42.671598   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 08:29:42.676085   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:29:42.676617   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:29:42.681684   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:29:42.681709   11062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 08:29:42.701223   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 08:29:42.701249   11062 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 08:29:42.704407   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 08:29:42.704432   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 08:29:42.704702   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 08:29:42.704724   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 08:29:42.716942   11062 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 08:29:42.717798   11062 node_ready.go:35] waiting up to 6m0s for node "addons-916029" to be "Ready" ...
	I1213 08:29:42.723589   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:29:42.727742   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:29:42.758616   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 08:29:42.758661   11062 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 08:29:42.770022   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 08:29:42.770053   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 08:29:42.779875   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 08:29:42.779910   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 08:29:42.816659   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:29:42.816756   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 08:29:42.820551   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 08:29:42.820626   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 08:29:42.831279   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 08:29:42.831302   11062 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 08:29:42.868870   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:29:42.869732   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 08:29:42.869753   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 08:29:42.898798   11062 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:42.898823   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 08:29:42.932220   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 08:29:42.932271   11062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 08:29:42.941338   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:29:42.960138   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:42.984454   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 08:29:42.984477   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 08:29:43.023740   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 08:29:43.023842   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 08:29:43.078113   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:29:43.078223   11062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 08:29:43.111521   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:29:43.228716   11062 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-916029" context rescaled to 1 replicas
	I1213 08:29:43.810695   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.157277498s)
	I1213 08:29:43.810728   11062 addons.go:495] Verifying addon ingress=true in "addons-916029"
	I1213 08:29:43.810936   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.147379833s)
	I1213 08:29:43.811215   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.135074015s)
	I1213 08:29:43.811463   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.134821754s)
	I1213 08:29:43.811618   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.08793918s)
	I1213 08:29:43.811636   11062 addons.go:495] Verifying addon registry=true in "addons-916029"
	I1213 08:29:43.811833   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084015903s)
	I1213 08:29:43.811986   11062 addons.go:495] Verifying addon metrics-server=true in "addons-916029"
	I1213 08:29:43.812285   11062 out.go:179] * Verifying ingress addon...
	I1213 08:29:43.813094   11062 out.go:179] * Verifying registry addon...
	I1213 08:29:43.813157   11062 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-916029 service yakd-dashboard -n yakd-dashboard
	
	I1213 08:29:43.814635   11062 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 08:29:43.816059   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 08:29:43.819875   11062 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 08:29:43.819893   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:43.820039   11062 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 08:29:43.820059   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 08:29:43.827056   11062 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1213 08:29:44.212680   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.252491004s)
	W1213 08:29:44.212742   11062 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 08:29:44.212766   11062 retry.go:31] will retry after 348.428289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 08:29:44.212985   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.101379779s)
	I1213 08:29:44.213008   11062 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-916029"
	I1213 08:29:44.215272   11062 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 08:29:44.217400   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 08:29:44.221241   11062 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 08:29:44.221264   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:44.321743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:44.321928   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:44.561521   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:44.720621   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:29:44.720783   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:44.817722   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:44.819048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.221063   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:45.321884   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.322087   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:45.720559   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:45.821248   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.821512   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:46.221044   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:46.317956   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:46.318385   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:46.720692   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:46.821053   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:46.821342   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:47.023475   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461916079s)
	W1213 08:29:47.220505   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:47.220626   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:47.321261   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:47.321436   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:47.720386   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:47.820665   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:47.820730   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:48.220450   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:48.319995   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:48.320105   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:48.720925   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:48.821473   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:48.821640   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.220782   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:49.317522   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.318997   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 08:29:49.720522   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:49.720643   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:49.822265   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:49.822471   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.949647   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 08:29:49.949705   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:49.967364   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:50.067308   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 08:29:50.079458   11062 addons.go:239] Setting addon gcp-auth=true in "addons-916029"
	I1213 08:29:50.079512   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:50.079834   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:50.097255   11062 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 08:29:50.097395   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:50.114777   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:50.208253   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:50.209494   11062 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 08:29:50.210689   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 08:29:50.210701   11062 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 08:29:50.220550   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:50.224251   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 08:29:50.224268   11062 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 08:29:50.236189   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:29:50.236209   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 08:29:50.248172   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:29:50.318020   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:50.318716   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:50.538781   11062 addons.go:495] Verifying addon gcp-auth=true in "addons-916029"
	I1213 08:29:50.540141   11062 out.go:179] * Verifying gcp-auth addon...
	I1213 08:29:50.542002   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 08:29:50.544266   11062 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 08:29:50.544281   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:50.721273   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:50.817883   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:50.818405   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:51.044765   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:51.220309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:51.318214   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:51.318716   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:51.545449   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:51.719920   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:29:51.720599   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:51.818099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:51.818708   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:52.045512   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:52.219934   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:52.317734   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:52.318103   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:52.544811   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:52.720756   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:52.817389   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:52.818865   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:53.044457   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:53.219906   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:53.317626   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:53.318241   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:53.544929   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:53.720678   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:53.720807   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:53.817134   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:53.818558   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:54.044977   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:54.220601   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:54.318328   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:54.318853   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:54.544407   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:54.720212   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:54.818098   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:54.818836   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:55.045511   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:55.219937   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:55.317313   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:55.318918   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:55.544451   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:55.720282   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:55.817928   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:55.818577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:56.045141   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:56.220609   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:56.220710   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:56.318014   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:56.318830   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:56.545465   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:56.720280   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:56.817295   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:56.818727   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:57.045911   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:57.220343   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:57.318393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:57.318498   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:57.545172   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:57.720862   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:57.817350   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:57.818871   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:58.045559   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:58.219901   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:58.317673   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:58.318192   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:58.544932   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:58.720579   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:58.720860   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:58.817645   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:58.818207   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:59.044724   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:59.220297   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:59.317879   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:59.318357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:59.544927   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:59.720547   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:59.818001   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:59.818595   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:00.045062   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:00.220784   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:00.317284   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:00.318739   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:00.545308   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:00.720707   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:00.720969   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:00.817988   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:00.818430   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:01.045134   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:01.220710   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:01.317281   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:01.319256   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:01.544805   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:01.720258   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:01.818241   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:01.818848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:02.044397   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:02.220013   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:02.317430   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:02.318980   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:02.544560   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:02.720342   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:02.818065   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:02.818593   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:03.045156   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:03.220827   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:03.220931   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:03.317401   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:03.318888   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:03.544630   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:03.720094   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:03.817813   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:03.818285   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:04.044909   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:04.220575   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:04.318025   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:04.318821   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:04.545507   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:04.720055   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:04.818236   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:04.818308   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:05.044848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:05.220319   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:05.317763   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:05.318465   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:05.545137   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:05.720550   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:05.720695   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:05.817504   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:05.819030   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:06.044563   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:06.220341   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:06.317955   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:06.318412   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:06.545110   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:06.720888   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:06.818236   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:06.819120   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:07.044937   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:07.220558   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:07.318232   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:07.318789   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:07.545613   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:07.720689   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:07.817351   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:07.818971   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:08.044420   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:08.220053   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:08.220662   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:08.318072   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:08.318763   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:08.544509   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:08.720114   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:08.818262   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:08.818913   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:09.044381   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:09.220015   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:09.317250   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:09.318874   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:09.545439   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:09.719951   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:09.818229   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:09.818686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:10.045455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:10.220056   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:10.220737   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:10.317099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:10.318796   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:10.545455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:10.720004   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:10.817213   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:10.818903   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:11.044399   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:11.219838   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:11.317585   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:11.318334   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:11.544897   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:11.720530   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:11.818475   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:11.818645   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:12.045211   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:12.220800   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:12.317580   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:12.319020   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:12.544743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:12.720156   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:12.720238   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:12.818017   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:12.818471   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:13.045198   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:13.220669   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:13.317369   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:13.319166   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:13.544921   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:13.720745   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:13.817206   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:13.818861   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.045393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:14.219825   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:14.318251   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:14.318633   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.545355   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:14.719980   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:14.819171   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.819298   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.045383   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:15.220886   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:15.220914   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:15.317646   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.318357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:15.545244   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:15.721078   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:15.817865   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.818253   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:16.044911   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:16.220455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:16.317914   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:16.318625   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:16.545375   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:16.720872   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:16.817768   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:16.818182   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:17.044760   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:17.220311   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:17.318180   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:17.318480   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:17.545124   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:17.720649   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:17.720665   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:17.818099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:17.818807   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:18.045518   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:18.220213   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:18.317668   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:18.318404   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:18.545247   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:18.720816   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:18.817272   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:18.818901   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:19.044364   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:19.219956   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:19.317317   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:19.318848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:19.545154   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:19.720650   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:19.817292   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:19.819340   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:20.044826   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:20.220590   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:20.220590   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:20.318065   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:20.318778   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:20.545432   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:20.720175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:20.818307   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:20.818992   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:21.044409   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:21.219797   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:21.318300   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:21.318847   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:21.545325   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:21.720866   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:21.817534   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:21.819175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:22.044755   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:22.220304   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:22.318114   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:22.318596   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:22.545675   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:22.720195   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:22.720243   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:22.817801   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:22.818482   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:23.045012   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:23.220846   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:23.317509   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:23.318119   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:23.545129   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:23.720857   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:23.817617   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:23.819108   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.044512   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:24.220098   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:24.319106   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:24.321811   11062 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 08:30:24.321835   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.545174   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:24.723012   11062 node_ready.go:49] node "addons-916029" is "Ready"
	I1213 08:30:24.723045   11062 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 08:30:24.723050   11062 node_ready.go:38] duration metric: took 42.00523054s for node "addons-916029" to be "Ready" ...
	I1213 08:30:24.723060   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:24.723072   11062 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:30:24.723156   11062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:30:24.742804   11062 api_server.go:72] duration metric: took 42.481197036s to wait for apiserver process to appear ...
	I1213 08:30:24.742832   11062 api_server.go:88] waiting for apiserver healthz status ...
	I1213 08:30:24.742859   11062 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 08:30:24.748144   11062 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 08:30:24.749200   11062 api_server.go:141] control plane version: v1.34.2
	I1213 08:30:24.749229   11062 api_server.go:131] duration metric: took 6.388845ms to wait for apiserver health ...
	I1213 08:30:24.749240   11062 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 08:30:24.822215   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:24.822244   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.824723   11062 system_pods.go:59] 20 kube-system pods found
	I1213 08:30:24.824819   11062 system_pods.go:61] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:24.824834   11062 system_pods.go:61] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:24.824854   11062 system_pods.go:61] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:24.824863   11062 system_pods.go:61] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:24.824879   11062 system_pods.go:61] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:24.824887   11062 system_pods.go:61] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:24.824901   11062 system_pods.go:61] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:24.824910   11062 system_pods.go:61] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:24.824916   11062 system_pods.go:61] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:24.824939   11062 system_pods.go:61] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:24.824949   11062 system_pods.go:61] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:24.824956   11062 system_pods.go:61] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:24.824974   11062 system_pods.go:61] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:24.824988   11062 system_pods.go:61] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:24.825003   11062 system_pods.go:61] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:24.825023   11062 system_pods.go:61] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:24.825036   11062 system_pods.go:61] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:24.825054   11062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.825065   11062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.825074   11062 system_pods.go:61] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:24.825082   11062 system_pods.go:74] duration metric: took 75.834975ms to wait for pod list to return data ...
	I1213 08:30:24.825097   11062 default_sa.go:34] waiting for default service account to be created ...
	I1213 08:30:24.827448   11062 default_sa.go:45] found service account: "default"
	I1213 08:30:24.827472   11062 default_sa.go:55] duration metric: took 2.369082ms for default service account to be created ...
	I1213 08:30:24.827500   11062 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 08:30:24.923932   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:24.923968   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:24.923979   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:24.923988   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:24.923996   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:24.924004   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:24.924014   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:24.924026   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:24.924035   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:24.924042   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:24.924056   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:24.924064   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:24.924072   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:24.924083   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:24.924096   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:24.924108   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:24.924117   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:24.924128   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:24.924140   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.924153   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.924166   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:24.924197   11062 retry.go:31] will retry after 300.692208ms: missing components: kube-dns
	I1213 08:30:25.045996   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:25.223089   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:25.230000   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.230037   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.230049   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:25.230058   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.230066   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.230074   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.230093   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.230100   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.230106   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.230112   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.230122   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.230128   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.230134   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.230145   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.230155   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.230163   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.230171   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.230179   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.230187   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.230195   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.230210   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:25.230226   11062 retry.go:31] will retry after 242.687821ms: missing components: kube-dns
	I1213 08:30:25.319034   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:25.319959   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:25.478139   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.478186   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.478199   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:25.478210   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.478219   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.478228   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.478234   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.478242   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.478250   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.478256   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.478265   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.478270   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.478277   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.478286   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.478294   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.478310   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.478318   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.478325   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.478335   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.478346   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.478354   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:25.478372   11062 retry.go:31] will retry after 482.920653ms: missing components: kube-dns
	I1213 08:30:25.545686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:25.720945   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:25.821274   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:25.821454   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:25.966471   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.966522   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.966532   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Running
	I1213 08:30:25.966544   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.966554   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.966563   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.966570   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.966577   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.966585   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.966592   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.966602   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.966609   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.966616   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.966625   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.966634   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.966644   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.966655   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.966664   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.966674   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.966686   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.966693   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Running
	I1213 08:30:25.966705   11062 system_pods.go:126] duration metric: took 1.139196644s to wait for k8s-apps to be running ...
	I1213 08:30:25.966717   11062 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 08:30:25.966779   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:30:25.983322   11062 system_svc.go:56] duration metric: took 16.595233ms WaitForService to wait for kubelet
	I1213 08:30:25.983352   11062 kubeadm.go:587] duration metric: took 43.721750528s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:30:25.983377   11062 node_conditions.go:102] verifying NodePressure condition ...
	I1213 08:30:25.985760   11062 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 08:30:25.985785   11062 node_conditions.go:123] node cpu capacity is 8
	I1213 08:30:25.985805   11062 node_conditions.go:105] duration metric: took 2.421957ms to run NodePressure ...
	I1213 08:30:25.985821   11062 start.go:242] waiting for startup goroutines ...
	I1213 08:30:26.046084   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:26.221580   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:26.318510   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:26.318755   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:26.545470   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:26.720657   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:26.818712   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:26.820190   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:27.045270   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:27.221587   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:27.318432   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:27.318822   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:27.546047   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:27.721440   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:27.818459   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:27.818891   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:28.045339   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:28.222096   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:28.318181   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:28.318449   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:28.545760   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:28.721148   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:28.817932   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:28.818339   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:29.044782   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:29.220741   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:29.318671   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:29.319050   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:29.544612   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:29.720243   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:29.818006   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:29.818735   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:30.046314   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:30.221056   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:30.319870   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:30.319891   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:30.545695   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:30.720828   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:30.817936   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:30.821037   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:31.045088   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:31.221130   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:31.317913   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:31.318464   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:31.545668   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:31.720791   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:31.818549   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:31.819107   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.058222   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:32.221175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:32.317617   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:32.318469   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.545595   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:32.720808   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:32.819085   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.819085   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.045048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:33.221401   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:33.318178   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.318535   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:33.546357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:33.721065   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:33.817452   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.819064   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:34.045944   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:34.220833   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:34.317528   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:34.319231   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:34.545552   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:34.720585   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:34.818404   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:34.819389   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:35.044914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:35.222845   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:35.317965   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:35.319289   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:35.545227   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:35.722069   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:35.817159   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:35.818593   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:36.047498   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:36.222898   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:36.317689   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:36.319870   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:36.545743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:36.720882   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:36.873589   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:36.873679   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:37.127949   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:37.221456   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:37.318139   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.318850   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:37.546405   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:37.720509   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:37.818768   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.818820   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.046268   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:38.221362   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:38.318021   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:38.318862   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.545214   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:38.721430   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:38.819019   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.819859   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.045982   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:39.221010   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.317920   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.319199   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:39.545189   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:39.721577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.818527   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.819378   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.045753   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.221292   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.363864   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.363976   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:40.545472   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.720557   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.820432   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.820582   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.045349   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.220801   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.318313   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.318963   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:41.544273   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.720959   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.817248   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.818943   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.046317   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.221419   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.320708   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.321323   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.545553   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.721519   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.818507   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.818897   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.045369   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.221370   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.397914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.398118   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.545633   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.720381   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.818085   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.818799   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.045393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.220353   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.320776   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.320914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.545979   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.720731   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.817364   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.818861   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.046218   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.221309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.318641   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.318698   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.545996   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.721797   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.818522   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.818896   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.046418   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.221333   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.317882   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.318504   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.544591   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.720102   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.817934   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.818341   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.045951   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.221275   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.318037   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:47.318577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.545565   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.720508   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.818817   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.818865   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.045828   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.220723   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.318179   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.318974   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:48.545000   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.721118   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.818111   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.818645   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:49.046048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.221271   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.318099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:49.318595   11062 kapi.go:107] duration metric: took 1m5.502535555s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 08:30:49.545382   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.721686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.818124   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.046582   11062 kapi.go:107] duration metric: took 59.504574412s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 08:30:50.048566   11062 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-916029 cluster.
	I1213 08:30:50.051997   11062 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 08:30:50.054289   11062 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 08:30:50.220332   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.323059   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.723793   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.817798   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.221837   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.318380   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.720835   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.818325   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.245192   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.317734   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.721143   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.817899   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.221322   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.317821   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.721657   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.821790   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.221309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.317857   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.721293   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.817593   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.220985   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.317834   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.721235   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.822140   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.221467   11062 kapi.go:107] duration metric: took 1m12.004069041s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 08:30:56.322576   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.820253   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.318380   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.818060   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.318854   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.818416   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.318968   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.818458   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.317770   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.818547   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:01.318030   11062 kapi.go:107] duration metric: took 1m17.503392746s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 08:31:01.319844   11062 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1213 08:31:01.321163   11062 addons.go:530] duration metric: took 1m19.059528306s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner inspektor-gadget cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1213 08:31:01.321212   11062 start.go:247] waiting for cluster config update ...
	I1213 08:31:01.321240   11062 start.go:256] writing updated cluster config ...
	I1213 08:31:01.321519   11062 ssh_runner.go:195] Run: rm -f paused
	I1213 08:31:01.325385   11062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:01.328129   11062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lp9sl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.331670   11062 pod_ready.go:94] pod "coredns-66bc5c9577-lp9sl" is "Ready"
	I1213 08:31:01.331689   11062 pod_ready.go:86] duration metric: took 3.542ms for pod "coredns-66bc5c9577-lp9sl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.333372   11062 pod_ready.go:83] waiting for pod "etcd-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.336499   11062 pod_ready.go:94] pod "etcd-addons-916029" is "Ready"
	I1213 08:31:01.336514   11062 pod_ready.go:86] duration metric: took 3.12678ms for pod "etcd-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.338070   11062 pod_ready.go:83] waiting for pod "kube-apiserver-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.341312   11062 pod_ready.go:94] pod "kube-apiserver-addons-916029" is "Ready"
	I1213 08:31:01.341332   11062 pod_ready.go:86] duration metric: took 3.245475ms for pod "kube-apiserver-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.342848   11062 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.728162   11062 pod_ready.go:94] pod "kube-controller-manager-addons-916029" is "Ready"
	I1213 08:31:01.728184   11062 pod_ready.go:86] duration metric: took 385.321785ms for pod "kube-controller-manager-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.928893   11062 pod_ready.go:83] waiting for pod "kube-proxy-kr7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.329346   11062 pod_ready.go:94] pod "kube-proxy-kr7zc" is "Ready"
	I1213 08:31:02.329377   11062 pod_ready.go:86] duration metric: took 400.458359ms for pod "kube-proxy-kr7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.529165   11062 pod_ready.go:83] waiting for pod "kube-scheduler-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.928836   11062 pod_ready.go:94] pod "kube-scheduler-addons-916029" is "Ready"
	I1213 08:31:02.928860   11062 pod_ready.go:86] duration metric: took 399.667147ms for pod "kube-scheduler-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.928875   11062 pod_ready.go:40] duration metric: took 1.603468197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:02.976973   11062 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 08:31:02.978919   11062 out.go:179] * Done! kubectl is now configured to use "addons-916029" cluster and "default" namespace by default
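	The long runs of "waiting for pod …, current state: Pending" lines above, each ending in a single "duration metric: took …" entry (kapi.go:96/107, pod_ready.go), reflect a simple poll-until-ready loop. The sketch below is a minimal illustration of that pattern in Go, not minikube's actual implementation; the label string, interval, and the pretend "ready on the third check" condition are placeholders chosen only to mirror the shape of the log.

	```go
	// Minimal sketch (assumption: illustrative only, not minikube code) of the
	// readiness-polling pattern seen in the kapi.go/pod_ready.go log lines:
	// re-check a condition on a fixed interval, log Pending each tick, and
	// report the total wait as a duration metric when done or timed out.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it returns true, an error, or ctx expires.
	// The returned duration mirrors the "duration metric: took …" lines in the log.
	func waitFor(ctx context.Context, label string, interval time.Duration, check func() (bool, error)) (time.Duration, error) {
		start := time.Now()
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			ok, err := check()
			if err != nil {
				return time.Since(start), err
			}
			if ok {
				return time.Since(start), nil
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", label)
			select {
			case <-ctx.Done():
				return time.Since(start), errors.New("timed out waiting for " + label)
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		tries := 0
		took, err := waitFor(ctx, "kubernetes.io/minikube-addons=registry", 500*time.Millisecond, func() (bool, error) {
			tries++
			return tries >= 3, nil // pretend the pod becomes Ready on the third check
		})
		fmt.Printf("duration metric: took %s (err=%v)\n", took, err)
	}
	```

	Running the sketch prints a few Pending lines and then one "took …" line, which is the same shape as each addon wait in the log above: many interim Pending entries, exactly one closing duration metric per label.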
	
	
	==> CRI-O <==
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.200927089Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-d9jtf/POD" id=cbfcf471-3900-448a-8091-d2e45623cc8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.201036497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.209859416Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-d9jtf Namespace:default ID:18bef3c56d9154b94a5485e3741adab5048e20b319654d73afc681d0aa029020 UID:cf9b1b0a-b274-46b3-a1a6-a0c48389a811 NetNS:/var/run/netns/cc97d332-9368-427c-a77e-30a82f87a0ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a960}] Aliases:map[]}"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.209899361Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-d9jtf to CNI network \"kindnet\" (type=ptp)"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.221395634Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-d9jtf Namespace:default ID:18bef3c56d9154b94a5485e3741adab5048e20b319654d73afc681d0aa029020 UID:cf9b1b0a-b274-46b3-a1a6-a0c48389a811 NetNS:/var/run/netns/cc97d332-9368-427c-a77e-30a82f87a0ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a960}] Aliases:map[]}"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.221577516Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-d9jtf for CNI network kindnet (type=ptp)"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.222655806Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.223590004Z" level=info msg="Ran pod sandbox 18bef3c56d9154b94a5485e3741adab5048e20b319654d73afc681d0aa029020 with infra container: default/hello-world-app-5d498dc89-d9jtf/POD" id=cbfcf471-3900-448a-8091-d2e45623cc8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.224753089Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6b0f1592-aaff-45fe-bce3-f0eb6ad753a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.224897927Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=6b0f1592-aaff-45fe-bce3-f0eb6ad753a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.224930891Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=6b0f1592-aaff-45fe-bce3-f0eb6ad753a1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.225583501Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1df1a4b2-ea19-414d-8c10-6929bb64a299 name=/runtime.v1.ImageService/PullImage
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.235151104Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.669092048Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=1df1a4b2-ea19-414d-8c10-6929bb64a299 name=/runtime.v1.ImageService/PullImage
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.669732403Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0f6327a4-0076-40bc-ae31-29ecb81f60e0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.671567269Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1db872c7-f9f9-4450-9d34-e0074414c2f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.674951352Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-d9jtf/hello-world-app" id=76db4a4e-ec73-4de6-886c-e4847e67a7d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.675057758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.68091443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.681068618Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2b0ec274456948fd8dcb1d387733232cbcc9bcc2bbd35cd2394c57977c5e7879/merged/etc/passwd: no such file or directory"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.681091914Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2b0ec274456948fd8dcb1d387733232cbcc9bcc2bbd35cd2394c57977c5e7879/merged/etc/group: no such file or directory"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.681380904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.720927275Z" level=info msg="Created container 22cc9129ef0b2801bb3bb5278e81e6d2775045300c6e4db006dc7996458852e0: default/hello-world-app-5d498dc89-d9jtf/hello-world-app" id=76db4a4e-ec73-4de6-886c-e4847e67a7d3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.721635377Z" level=info msg="Starting container: 22cc9129ef0b2801bb3bb5278e81e6d2775045300c6e4db006dc7996458852e0" id=c608aa1e-c3c6-4804-9211-6502d4142435 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 08:33:49 addons-916029 crio[778]: time="2025-12-13T08:33:49.7241152Z" level=info msg="Started container" PID=9534 containerID=22cc9129ef0b2801bb3bb5278e81e6d2775045300c6e4db006dc7996458852e0 description=default/hello-world-app-5d498dc89-d9jtf/hello-world-app id=c608aa1e-c3c6-4804-9211-6502d4142435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18bef3c56d9154b94a5485e3741adab5048e20b319654d73afc681d0aa029020
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	22cc9129ef0b2       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   18bef3c56d915       hello-world-app-5d498dc89-d9jtf             default
	1eacd1a75322b       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   9333a7d801d60       registry-creds-764b6fb674-vj2wj             kube-system
	a671e17ea6cb1       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   08ad56b3ed0d2       nginx                                       default
	b2a20bd000d24       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   421c33a2994e6       busybox                                     default
	e6db23cda0e2e       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   2f3afb1dcee5f       ingress-nginx-controller-85d4c799dd-mh7x6   ingress-nginx
	4bb25dd81c054       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	698d7bca1539f       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	e4fc393aefc08       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	b4d11fbe011a5       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	1adfcaf697f26       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   f6705a881897d       gadget-f6967                                gadget
	946292da75844       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   79c87d0ba134e       ingress-nginx-admission-patch-hwg9m         ingress-nginx
	e74c5d67f2c68       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	550d4b374da52       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   c1091482eca05       gcp-auth-78565c9fb4-trzh2                   gcp-auth
	9af098f389792       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   2c68489e2f00d       registry-proxy-cd6hw                        kube-system
	64bbaac2488ef       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   607392f7905ed       csi-hostpathplugin-btrm5                    kube-system
	8c199d50a7ce0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   cdf0b4efaaf9f       amd-gpu-device-plugin-vwtp8                 kube-system
	d4e6af613c7f6       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   830d3b663457b       local-path-provisioner-648f6765c9-kl88n     local-path-storage
	fde0db8ec43cf       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   7224a44197102       nvidia-device-plugin-daemonset-ss6tf        kube-system
	e6ccab73be6a9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   b7d0583cf0d44       ingress-nginx-admission-create-k5s7l        ingress-nginx
	fe444a659a1b4       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   440e12a30501e       metrics-server-85b7d694d7-zrm5x             kube-system
	fa1c6645c83e0       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   d60cc7958cb9f       csi-hostpath-resizer-0                      kube-system
	73d0270b822b7       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago            Running             yakd                                     0                   7c96883ed6893       yakd-dashboard-6654c87f9b-z5th7             yakd-dashboard
	ad874af797e83       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   70bec611eeeca       csi-hostpath-attacher-0                     kube-system
	7ca9d8ef02322       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   58a3d0fa25fb2       snapshot-controller-7d9fbc56b8-wfsd6        kube-system
	2932c3775c90e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   bc3b1ea1c391d       snapshot-controller-7d9fbc56b8-4d65q        kube-system
	66396849dd6c6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   a4c2fb473d934       registry-6b586f9694-xvfhz                   kube-system
	f41a46f1596b5       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   237ba51eb498a       cloud-spanner-emulator-5bdddb765-gtmzw      default
	3c9c834fe19b9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   e6b3b097d0715       kube-ingress-dns-minikube                   kube-system
	fe903b8244ca7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   2c8fd1fb4e8bd       coredns-66bc5c9577-lp9sl                    kube-system
	f0ce98858d71b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   a39dd6791e5cc       storage-provisioner                         kube-system
	dc46446aa2f04       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             4 minutes ago            Running             kube-proxy                               0                   e5ddd168c148a       kube-proxy-kr7zc                            kube-system
	ee8f73e803fab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   9cf5b1100e741       kindnet-qpw8x                               kube-system
	8c1744d0402c3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             4 minutes ago            Running             kube-apiserver                           0                   8de7a85804b11       kube-apiserver-addons-916029                kube-system
	1814222a5735c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             4 minutes ago            Running             kube-controller-manager                  0                   b11095866ee3e       kube-controller-manager-addons-916029       kube-system
	ff2d7aaca1ac9       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             4 minutes ago            Running             kube-scheduler                           0                   6f59e9719337c       kube-scheduler-addons-916029                kube-system
	ea0dc0efd19f5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             4 minutes ago            Running             etcd                                     0                   e6108698a27b8       etcd-addons-916029                          kube-system
	
	
	==> coredns [fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376] <==
	[INFO] 10.244.0.20:46843 - 56153 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118744s
	[INFO] 10.244.0.20:38409 - 3935 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004710858s
	[INFO] 10.244.0.20:37022 - 25180 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006202584s
	[INFO] 10.244.0.20:57701 - 9202 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004765676s
	[INFO] 10.244.0.20:34781 - 20197 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00628365s
	[INFO] 10.244.0.20:51977 - 38486 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004408159s
	[INFO] 10.244.0.20:53836 - 47143 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005625804s
	[INFO] 10.244.0.20:41624 - 31056 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001032431s
	[INFO] 10.244.0.20:48401 - 48285 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002722073s
	[INFO] 10.244.0.27:59798 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00026659s
	[INFO] 10.244.0.27:54232 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174328s
	[INFO] 10.244.0.29:55543 - 21397 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000201964s
	[INFO] 10.244.0.29:42244 - 58519 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000267203s
	[INFO] 10.244.0.29:46698 - 40880 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000108008s
	[INFO] 10.244.0.29:47410 - 45403 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000135431s
	[INFO] 10.244.0.29:55411 - 41967 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000093063s
	[INFO] 10.244.0.29:33729 - 60445 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000143157s
	[INFO] 10.244.0.29:36614 - 27384 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005462507s
	[INFO] 10.244.0.29:42554 - 38772 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005786871s
	[INFO] 10.244.0.29:42960 - 14712 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005651373s
	[INFO] 10.244.0.29:48848 - 42404 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006131025s
	[INFO] 10.244.0.29:51371 - 50039 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003301276s
	[INFO] 10.244.0.29:46299 - 16266 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005082786s
	[INFO] 10.244.0.29:50772 - 3368 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001664431s
	[INFO] 10.244.0.29:39138 - 7938 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001734204s
	
	
	==> describe nodes <==
	Name:               addons-916029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-916029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=addons-916029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T08_29_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-916029
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-916029"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 08:29:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-916029
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 08:33:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 08:33:42 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 08:33:42 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 08:33:42 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 08:33:42 +0000   Sat, 13 Dec 2025 08:30:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-916029
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                e183f2ea-5441-4130-a280-3a2146a78b75
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     cloud-spanner-emulator-5bdddb765-gtmzw       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  default                     hello-world-app-5d498dc89-d9jtf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-f6967                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  gcp-auth                    gcp-auth-78565c9fb4-trzh2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-mh7x6    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m7s
	  kube-system                 amd-gpu-device-plugin-vwtp8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-66bc5c9577-lp9sl                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 csi-hostpathplugin-btrm5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-addons-916029                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-qpw8x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m9s
	  kube-system                 kube-apiserver-addons-916029                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-addons-916029        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-kr7zc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-addons-916029                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 metrics-server-85b7d694d7-zrm5x              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m7s
	  kube-system                 nvidia-device-plugin-daemonset-ss6tf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 registry-6b586f9694-xvfhz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 registry-creds-764b6fb674-vj2wj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-proxy-cd6hw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-4d65q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-wfsd6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  local-path-storage          local-path-provisioner-648f6765c9-kl88n      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-z5th7              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node addons-916029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node addons-916029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x8 over 4m19s)  kubelet          Node addons-916029 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-916029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-916029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-916029 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m9s                   node-controller  Node addons-916029 event: Registered Node addons-916029 in Controller
	  Normal  NodeReady                3m26s                  kubelet          Node addons-916029 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.083084] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023653] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.640510] kauditd_printk_skb: 47 callbacks suppressed
	[Dec13 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.043569] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023846] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023889] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +2.047766] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +4.031542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +8.511095] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 08:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[ +32.252585] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	
	
	==> etcd [ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90] <==
	{"level":"warn","ts":"2025-12-13T08:29:33.536197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.542426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.559674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.566878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.573683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.580647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.587525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.593705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.600114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.606899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.613252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.619455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.626147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.632054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.647464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.653785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.659855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.704712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:44.780673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:44.788069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.079696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.086234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.113220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.125306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T08:30:52.076066Z","caller":"traceutil/trace.go:172","msg":"trace[289190147] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"114.785168ms","start":"2025-12-13T08:30:51.961262Z","end":"2025-12-13T08:30:52.076047Z","steps":["trace[289190147] 'process raft request'  (duration: 62.978384ms)","trace[289190147] 'compare'  (duration: 51.706949ms)"],"step_count":2}
	
	
	==> gcp-auth [550d4b374da5260a2999b860be3bc240d05ff59fd92ea2272f4a911eaf79e79a] <==
	2025/12/13 08:30:49 GCP Auth Webhook started!
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:12 Ready to marshal response ...
	2025/12/13 08:31:12 Ready to write response ...
	2025/12/13 08:31:12 Ready to marshal response ...
	2025/12/13 08:31:12 Ready to write response ...
	2025/12/13 08:31:19 Ready to marshal response ...
	2025/12/13 08:31:19 Ready to write response ...
	2025/12/13 08:31:22 Ready to marshal response ...
	2025/12/13 08:31:22 Ready to write response ...
	2025/12/13 08:31:25 Ready to marshal response ...
	2025/12/13 08:31:25 Ready to write response ...
	2025/12/13 08:31:34 Ready to marshal response ...
	2025/12/13 08:31:34 Ready to write response ...
	2025/12/13 08:31:50 Ready to marshal response ...
	2025/12/13 08:31:50 Ready to write response ...
	2025/12/13 08:33:48 Ready to marshal response ...
	2025/12/13 08:33:48 Ready to write response ...
	
	
	==> kernel <==
	 08:33:50 up 16 min,  0 user,  load average: 0.32, 0.65, 0.34
	Linux addons-916029 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d] <==
	I1213 08:31:43.848720       1 main.go:301] handling current node
	I1213 08:31:53.849549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:31:53.849590       1 main.go:301] handling current node
	I1213 08:32:03.851953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:03.852009       1 main.go:301] handling current node
	I1213 08:32:13.849259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:13.849293       1 main.go:301] handling current node
	I1213 08:32:23.849167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:23.849205       1 main.go:301] handling current node
	I1213 08:32:33.851298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:33.851331       1 main.go:301] handling current node
	I1213 08:32:43.848888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:43.848927       1 main.go:301] handling current node
	I1213 08:32:53.848805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:32:53.848852       1 main.go:301] handling current node
	I1213 08:33:03.851350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:33:03.851382       1 main.go:301] handling current node
	I1213 08:33:13.855078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:33:13.855118       1 main.go:301] handling current node
	I1213 08:33:23.851610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:33:23.851657       1 main.go:301] handling current node
	I1213 08:33:33.849563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:33:33.849592       1 main.go:301] handling current node
	I1213 08:33:43.848806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:33:43.848842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0] <==
	 > logger="UnhandledError"
	E1213 08:30:41.863763       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.149.29:443: connect: connection refused" logger="UnhandledError"
	E1213 08:30:41.869152       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.149.29:443: connect: connection refused" logger="UnhandledError"
	W1213 08:30:42.865414       1 handler_proxy.go:99] no RequestInfo found in the context
	W1213 08:30:42.865462       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 08:30:42.865518       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 08:30:42.865534       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 08:30:42.865535       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 08:30:42.866657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 08:30:46.895307       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1213 08:30:46.895374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 08:30:46.895405       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 08:30:46.905306       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 08:31:11.636158       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38160: use of closed network connection
	E1213 08:31:11.781330       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38198: use of closed network connection
	I1213 08:31:25.366759       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 08:31:25.580155       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.46.111"}
	I1213 08:31:41.890002       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 08:33:48.969566       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.224.65"}
	
	
	==> kube-controller-manager [1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955] <==
	I1213 08:29:41.066971       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 08:29:41.067015       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 08:29:41.067042       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 08:29:41.067017       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 08:29:41.067080       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 08:29:41.067080       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 08:29:41.067135       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 08:29:41.067285       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 08:29:41.070834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 08:29:41.070840       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 08:29:41.070936       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 08:29:41.076184       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 08:29:41.081359       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 08:29:41.089841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 08:29:43.449635       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 08:30:11.074774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 08:30:11.074899       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 08:30:11.074935       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 08:30:11.097035       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 08:30:11.100231       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 08:30:11.175897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 08:30:11.201251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 08:30:26.023099       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 08:30:41.181382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 08:30:41.208157       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535] <==
	I1213 08:29:43.424758       1 server_linux.go:53] "Using iptables proxy"
	I1213 08:29:43.589388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 08:29:43.690372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 08:29:43.690399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 08:29:43.690507       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 08:29:43.712616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 08:29:43.712679       1 server_linux.go:132] "Using iptables Proxier"
	I1213 08:29:43.719167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 08:29:43.723857       1 server.go:527] "Version info" version="v1.34.2"
	I1213 08:29:43.723882       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 08:29:43.726163       1 config.go:200] "Starting service config controller"
	I1213 08:29:43.726172       1 config.go:106] "Starting endpoint slice config controller"
	I1213 08:29:43.726181       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 08:29:43.726181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 08:29:43.726273       1 config.go:309] "Starting node config controller"
	I1213 08:29:43.726285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 08:29:43.726436       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 08:29:43.726444       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 08:29:43.826332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 08:29:43.826406       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 08:29:43.826441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 08:29:43.826771       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0] <==
	E1213 08:29:34.093910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 08:29:34.093957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:29:34.093767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 08:29:34.096992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 08:29:34.097204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 08:29:34.097572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 08:29:34.097587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 08:29:34.098182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:29:34.098820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 08:29:34.098954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 08:29:34.098993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:29:34.099010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 08:29:34.099171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 08:29:34.099410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 08:29:34.099423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 08:29:35.006627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:29:35.075613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:29:35.082619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 08:29:35.115034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 08:29:35.126894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:29:35.156924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 08:29:35.157665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 08:29:35.169779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 08:29:35.201998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 08:29:35.590718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 08:31:58 addons-916029 kubelet[1282]: I1213 08:31:58.304806    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:31:58 addons-916029 kubelet[1282]: I1213 08:31:58.304859    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgjck\" (UniqueName: \"kubernetes.io/projected/e39ef3e7-3906-4a0f-835a-5e408b485eb6-kube-api-access-lgjck\") on node \"addons-916029\" DevicePath \"\""
	Dec 13 08:31:58 addons-916029 kubelet[1282]: E1213 08:31:58.309809    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:31:58.809787213 +0000 UTC m=+142.275729725 (durationBeforeRetry 500ms). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:31:58 addons-916029 kubelet[1282]: I1213 08:31:58.613749    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39ef3e7-3906-4a0f-835a-5e408b485eb6" path="/var/lib/kubelet/pods/e39ef3e7-3906-4a0f-835a-5e408b485eb6/volumes"
	Dec 13 08:31:58 addons-916029 kubelet[1282]: I1213 08:31:58.909765    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:31:58 addons-916029 kubelet[1282]: E1213 08:31:58.914374    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:31:59.914357209 +0000 UTC m=+143.380299720 (durationBeforeRetry 1s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:31:59 addons-916029 kubelet[1282]: I1213 08:31:59.917085    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:31:59 addons-916029 kubelet[1282]: E1213 08:31:59.922077    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:32:01.92205545 +0000 UTC m=+145.387997965 (durationBeforeRetry 2s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:32:01 addons-916029 kubelet[1282]: I1213 08:32:01.931032    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:32:01 addons-916029 kubelet[1282]: E1213 08:32:01.938476    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:32:05.93845611 +0000 UTC m=+149.404398620 (durationBeforeRetry 4s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:32:05 addons-916029 kubelet[1282]: I1213 08:32:05.963258    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:32:05 addons-916029 kubelet[1282]: E1213 08:32:05.967697    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:32:13.967674482 +0000 UTC m=+157.433617002 (durationBeforeRetry 8s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:32:12 addons-916029 kubelet[1282]: I1213 08:32:12.611159    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cd6hw" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:32:14 addons-916029 kubelet[1282]: I1213 08:32:14.025727    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:32:14 addons-916029 kubelet[1282]: E1213 08:32:14.029696    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:32:30.029669765 +0000 UTC m=+173.495612277 (durationBeforeRetry 16s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:32:30 addons-916029 kubelet[1282]: I1213 08:32:30.046334    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:32:30 addons-916029 kubelet[1282]: E1213 08:32:30.050540    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:33:02.050513673 +0000 UTC m=+205.516456188 (durationBeforeRetry 32s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:32:59 addons-916029 kubelet[1282]: I1213 08:32:59.610229    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vwtp8" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:33:00 addons-916029 kubelet[1282]: I1213 08:33:00.610937    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ss6tf" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:33:02 addons-916029 kubelet[1282]: I1213 08:33:02.067683    1282 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-22b18686-6234-4045-870c-b66bab9f6785\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa\") on node \"addons-916029\" "
	Dec 13 08:33:02 addons-916029 kubelet[1282]: E1213 08:33:02.071824    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa podName: nodeName:}" failed. No retries permitted until 2025-12-13 08:34:06.071801606 +0000 UTC m=+269.537744112 (durationBeforeRetry 1m4s). Error: UnmountDevice failed for volume "pvc-22b18686-6234-4045-870c-b66bab9f6785" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^2b6a1146-d7fe-11f0-8c4e-ce0035854eaa") on node "addons-916029" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = NotFound desc = volume id 2b6a1146-d7fe-11f0-8c4e-ce0035854eaa does not exist in the volumes list
	Dec 13 08:33:37 addons-916029 kubelet[1282]: I1213 08:33:37.610158    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cd6hw" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 08:33:49 addons-916029 kubelet[1282]: I1213 08:33:49.011724    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46kq\" (UniqueName: \"kubernetes.io/projected/cf9b1b0a-b274-46b3-a1a6-a0c48389a811-kube-api-access-k46kq\") pod \"hello-world-app-5d498dc89-d9jtf\" (UID: \"cf9b1b0a-b274-46b3-a1a6-a0c48389a811\") " pod="default/hello-world-app-5d498dc89-d9jtf"
	Dec 13 08:33:49 addons-916029 kubelet[1282]: I1213 08:33:49.011780    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cf9b1b0a-b274-46b3-a1a6-a0c48389a811-gcp-creds\") pod \"hello-world-app-5d498dc89-d9jtf\" (UID: \"cf9b1b0a-b274-46b3-a1a6-a0c48389a811\") " pod="default/hello-world-app-5d498dc89-d9jtf"
	Dec 13 08:33:50 addons-916029 kubelet[1282]: I1213 08:33:50.626472    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-d9jtf" podStartSLOduration=2.180768578 podStartE2EDuration="2.626452728s" podCreationTimestamp="2025-12-13 08:33:48 +0000 UTC" firstStartedPulling="2025-12-13 08:33:49.225242836 +0000 UTC m=+252.691185335" lastFinishedPulling="2025-12-13 08:33:49.670926972 +0000 UTC m=+253.136869485" observedRunningTime="2025-12-13 08:33:50.625405964 +0000 UTC m=+254.091348506" watchObservedRunningTime="2025-12-13 08:33:50.626452728 +0000 UTC m=+254.092395248"
	
	
	==> storage-provisioner [f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c] <==
	W1213 08:33:25.650343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:27.652934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:27.656767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:29.660092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:29.664630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:31.667294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:31.672170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:33.675933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:33.680022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:35.683229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:35.687835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:37.690986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:37.694593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:39.697131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:39.700559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:41.703187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:41.706563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:43.708953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:43.712364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:45.714937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:45.718371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:47.720755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:47.726002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:49.729660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:33:49.734699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
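Note on the kubelet messages in the dump above: the repeated UnmountDevice failures for pvc-22b18686-6234-4045-870c-b66bab9f6785 are retried with a doubling backoff, visible in the durationBeforeRetry values (500ms, 1s, 2s, 4s, 8s, 16s, 32s, 1m4s). A minimal sketch of that schedule, modelling only the delays printed in the log:

```bash
# Sketch of the retry schedule seen in the kubelet UnmountDevice errors above.
# Only the delays are modelled; the values match the durationBeforeRetry fields.
delay_ms=500
for attempt in $(seq 1 8); do
  echo "attempt ${attempt}: next retry in ${delay_ms}ms"
  delay_ms=$((delay_ms * 2))
done
```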
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-916029 -n addons-916029
helpers_test.go:270: (dbg) Run:  kubectl --context addons-916029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-916029 describe pod ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-916029 describe pod ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m: exit status 1 (55.465737ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k5s7l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hwg9m" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-916029 describe pod ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (238.791279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:33:51.314451   25790 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:33:51.314718   25790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:33:51.314728   25790 out.go:374] Setting ErrFile to fd 2...
	I1213 08:33:51.314733   25790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:33:51.314930   25790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:33:51.315228   25790 mustload.go:66] Loading cluster: addons-916029
	I1213 08:33:51.315560   25790 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:33:51.315579   25790 addons.go:622] checking whether the cluster is paused
	I1213 08:33:51.315657   25790 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:33:51.315669   25790 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:33:51.316063   25790 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:33:51.333522   25790 ssh_runner.go:195] Run: systemctl --version
	I1213 08:33:51.333567   25790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:33:51.351052   25790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:33:51.445982   25790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:33:51.446085   25790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:33:51.475653   25790 cri.go:89] found id: "1eacd1a75322bf66d02f5aa125525605bc7d40fb432d167238ec6b0a3bdbf941"
	I1213 08:33:51.475673   25790 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:33:51.475676   25790 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:33:51.475679   25790 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:33:51.475682   25790 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:33:51.475691   25790 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:33:51.475694   25790 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:33:51.475697   25790 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:33:51.475700   25790 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:33:51.475710   25790 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:33:51.475714   25790 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:33:51.475716   25790 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:33:51.475719   25790 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:33:51.475723   25790 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:33:51.475726   25790 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:33:51.475733   25790 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:33:51.475738   25790 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:33:51.475750   25790 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:33:51.475753   25790 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:33:51.475756   25790 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:33:51.475759   25790 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:33:51.475764   25790 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:33:51.475767   25790 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:33:51.475774   25790 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:33:51.475780   25790 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:33:51.475783   25790 cri.go:89] found id: ""
	I1213 08:33:51.475818   25790 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:33:51.489634   25790 out.go:203] 
	W1213 08:33:51.490797   25790 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:33:51.490821   25790 out.go:285] * 
	* 
	W1213 08:33:51.493925   25790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:33:51.495296   25790 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable ingress --alsologtostderr -v=1: exit status 11 (239.273124ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:33:51.555467   25852 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:33:51.555633   25852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:33:51.555643   25852 out.go:374] Setting ErrFile to fd 2...
	I1213 08:33:51.555647   25852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:33:51.555878   25852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:33:51.556161   25852 mustload.go:66] Loading cluster: addons-916029
	I1213 08:33:51.556556   25852 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:33:51.556583   25852 addons.go:622] checking whether the cluster is paused
	I1213 08:33:51.556716   25852 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:33:51.556734   25852 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:33:51.557159   25852 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:33:51.574906   25852 ssh_runner.go:195] Run: systemctl --version
	I1213 08:33:51.574964   25852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:33:51.591691   25852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:33:51.686072   25852 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:33:51.686171   25852 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:33:51.715475   25852 cri.go:89] found id: "1eacd1a75322bf66d02f5aa125525605bc7d40fb432d167238ec6b0a3bdbf941"
	I1213 08:33:51.715515   25852 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:33:51.715521   25852 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:33:51.715526   25852 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:33:51.715531   25852 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:33:51.715536   25852 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:33:51.715540   25852 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:33:51.715544   25852 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:33:51.715548   25852 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:33:51.715555   25852 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:33:51.715560   25852 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:33:51.715564   25852 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:33:51.715571   25852 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:33:51.715576   25852 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:33:51.715582   25852 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:33:51.715590   25852 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:33:51.715595   25852 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:33:51.715601   25852 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:33:51.715607   25852 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:33:51.715611   25852 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:33:51.715621   25852 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:33:51.715624   25852 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:33:51.715628   25852 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:33:51.715633   25852 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:33:51.715638   25852 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:33:51.715642   25852 cri.go:89] found id: ""
	I1213 08:33:51.715690   25852 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:33:51.729174   25852 out.go:203] 
	W1213 08:33:51.730416   25852 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:33:51.730436   25852 out.go:285] * 
	* 
	W1213 08:33:51.733352   25852 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:33:51.734538   25852 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.64s)
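The addons disable calls above fail identically: per the traces, minikube first checks whether the cluster is paused, listing kube-system containers via crictl and then running `sudo runc list -f json` on the node; the latter exits non-zero because `/run/runc` does not exist on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED. A rough manual reproduction of that check, assuming the addons-916029 profile from this run (the crictl and runc invocations are copied from the trace, not from the minikube source):

```bash
# Hypothetical manual re-run of the paused check that fails in the traces above.
# Profile name taken from this run; the node-side commands are copied verbatim.
PROFILE=addons-916029

# Step 1: list kube-system containers through the CRI (this part succeeds).
minikube -p "$PROFILE" ssh -- \
  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# Step 2: the call that breaks the check on crio; runc's state directory
# /run/runc is absent on the node, so this exits with status 1.
minikube -p "$PROFILE" ssh -- sudo runc list -f json
```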

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-f6967" [58314269-6574-473f-b6d3-afbeecb36b9d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003468454s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (242.339781ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:30.570407   22759 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:30.570581   22759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:30.570591   22759 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:30.570595   22759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:30.570802   22759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:30.571066   22759 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:30.571384   22759 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:30.571402   22759 addons.go:622] checking whether the cluster is paused
	I1213 08:31:30.571482   22759 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:30.571516   22759 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:30.571872   22759 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:30.591183   22759 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:30.591238   22759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:30.609214   22759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:30.706081   22759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:30.706168   22759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:30.735636   22759 cri.go:89] found id: "1eacd1a75322bf66d02f5aa125525605bc7d40fb432d167238ec6b0a3bdbf941"
	I1213 08:31:30.735663   22759 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:30.735667   22759 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:30.735671   22759 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:30.735674   22759 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:30.735677   22759 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:30.735680   22759 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:30.735683   22759 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:30.735686   22759 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:30.735692   22759 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:30.735695   22759 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:30.735698   22759 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:30.735701   22759 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:30.735704   22759 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:30.735707   22759 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:30.735713   22759 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:30.735720   22759 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:30.735725   22759 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:30.735728   22759 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:30.735730   22759 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:30.735735   22759 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:30.735741   22759 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:30.735743   22759 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:30.735746   22759 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:30.735752   22759 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:30.735760   22759 cri.go:89] found id: ""
	I1213 08:31:30.735801   22759 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:30.749603   22759 out.go:203] 
	W1213 08:31:30.751023   22759 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:30.751045   22759 out.go:285] * 
	* 
	W1213 08:31:30.754103   22759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:30.755351   22759 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
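The addon disable/enable failures in this section all have the same shape: the paused-state check (addons.go:622 above) first lists kube-system containers with crictl, which succeeds, and then runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node. A minimal reproduction sketch, assuming the addons-916029 profile is still running, reusing the exact commands visible in the stderr above:

	# Half 1 of the check: crictl lists the kube-system container IDs without error.
	out/minikube-linux-amd64 -p addons-916029 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# Half 2 of the check: runc's default state directory is /run/runc, which is
	# missing here, so this reproduces the "open /run/runc: no such file or directory"
	# error that surfaces as MK_ADDON_DISABLE_PAUSED above.
	out/minikube-linux-amd64 -p addons-916029 ssh -- sudo runc list -f json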

                                                
                                    
TestAddons/parallel/MetricsServer (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.237245ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003607381s
addons_test.go:465: (dbg) Run:  kubectl --context addons-916029 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (284.259001ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:25.297100   21562 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:25.297285   21562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.297299   21562 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:25.297306   21562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:25.297641   21562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:25.298009   21562 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:25.298587   21562 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.298621   21562 addons.go:622] checking whether the cluster is paused
	I1213 08:31:25.298766   21562 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:25.298787   21562 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:25.299360   21562 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:25.318571   21562 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:25.318622   21562 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:25.340165   21562 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:25.443826   21562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:25.443913   21562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:25.485282   21562 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:25.485307   21562 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:25.485312   21562 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:25.485316   21562 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:25.485319   21562 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:25.485323   21562 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:25.485325   21562 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:25.485328   21562 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:25.485331   21562 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:25.485341   21562 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:25.485344   21562 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:25.485347   21562 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:25.485350   21562 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:25.485353   21562 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:25.485356   21562 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:25.485360   21562 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:25.485366   21562 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:25.485371   21562 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:25.485374   21562 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:25.485376   21562 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:25.485382   21562 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:25.485384   21562 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:25.485387   21562 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:25.485389   21562 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:25.485392   21562 cri.go:89] found id: ""
	I1213 08:31:25.485427   21562 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:25.500074   21562 out.go:203] 
	W1213 08:31:25.502354   21562 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:25.502370   21562 out.go:285] * 
	* 
	W1213 08:31:25.505967   21562 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:25.507508   21562 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)

                                                
                                    
TestAddons/parallel/CSI (36.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1213 08:31:22.535233    9303 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 08:31:22.538477    9303 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 08:31:22.538523    9303 kapi.go:107] duration metric: took 3.311235ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.32296ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-916029 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-916029 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [6da9aaf7-1dbb-4985-8a70-6be95bb04481] Pending
helpers_test.go:353: "task-pv-pod" [6da9aaf7-1dbb-4985-8a70-6be95bb04481] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [6da9aaf7-1dbb-4985-8a70-6be95bb04481] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003668166s
addons_test.go:574: (dbg) Run:  kubectl --context addons-916029 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-916029 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-916029 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-916029 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-916029 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-916029 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-916029 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [e39ef3e7-3906-4a0f-835a-5e408b485eb6] Pending
helpers_test.go:353: "task-pv-pod-restore" [e39ef3e7-3906-4a0f-835a-5e408b485eb6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [e39ef3e7-3906-4a0f-835a-5e408b485eb6] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003418232s
addons_test.go:616: (dbg) Run:  kubectl --context addons-916029 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-916029 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-916029 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (240.024549ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:58.606983   23603 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:58.607296   23603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:58.607311   23603 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:58.607319   23603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:58.607669   23603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:58.607984   23603 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:58.608309   23603 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:58.608330   23603 addons.go:622] checking whether the cluster is paused
	I1213 08:31:58.608413   23603 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:58.608426   23603 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:58.608837   23603 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:58.627066   23603 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:58.627122   23603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:58.644253   23603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:58.740015   23603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:58.740122   23603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:58.768145   23603 cri.go:89] found id: "1eacd1a75322bf66d02f5aa125525605bc7d40fb432d167238ec6b0a3bdbf941"
	I1213 08:31:58.768170   23603 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:58.768176   23603 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:58.768181   23603 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:58.768186   23603 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:58.768191   23603 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:58.768196   23603 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:58.768201   23603 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:58.768206   23603 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:58.768219   23603 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:58.768224   23603 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:58.768229   23603 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:58.768234   23603 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:58.768237   23603 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:58.768241   23603 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:58.768255   23603 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:58.768263   23603 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:58.768268   23603 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:58.768271   23603 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:58.768274   23603 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:58.768276   23603 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:58.768279   23603 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:58.768282   23603 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:58.768284   23603 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:58.768287   23603 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:58.768290   23603 cri.go:89] found id: ""
	I1213 08:31:58.768328   23603 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:58.782169   23603 out.go:203] 
	W1213 08:31:58.783457   23603 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:58.783476   23603 out.go:285] * 
	* 
	W1213 08:31:58.786466   23603 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:58.787777   23603 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (238.299416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:58.846404   23665 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:58.846958   23665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:58.846975   23665 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:58.846982   23665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:58.847568   23665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:58.848220   23665 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:58.848601   23665 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:58.848624   23665 addons.go:622] checking whether the cluster is paused
	I1213 08:31:58.848710   23665 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:58.848722   23665 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:58.849069   23665 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:58.866662   23665 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:58.866717   23665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:58.883499   23665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:58.977790   23665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:58.977862   23665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:59.006411   23665 cri.go:89] found id: "1eacd1a75322bf66d02f5aa125525605bc7d40fb432d167238ec6b0a3bdbf941"
	I1213 08:31:59.006433   23665 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:59.006438   23665 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:59.006443   23665 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:59.006447   23665 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:59.006451   23665 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:59.006455   23665 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:59.006459   23665 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:59.006464   23665 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:59.006471   23665 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:59.006477   23665 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:59.006482   23665 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:59.006503   23665 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:59.006511   23665 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:59.006519   23665 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:59.006527   23665 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:59.006535   23665 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:59.006559   23665 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:59.006567   23665 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:59.006572   23665 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:59.006577   23665 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:59.006585   23665 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:59.006590   23665 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:59.006598   23665 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:59.006603   23665 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:59.006610   23665 cri.go:89] found id: ""
	I1213 08:31:59.006655   23665 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:59.020193   23665 out.go:203] 
	W1213 08:31:59.021546   23665 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:59.021568   23665 out.go:285] * 
	* 
	W1213 08:31:59.024611   23665 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:59.025971   23665 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (36.50s)
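Note that the functional part of the CSI flow above (PVC, pod, snapshot, restored PVC, restored pod, cleanup) completed; only the trailing volumesnapshots and csi-hostpath-driver disable calls hit the same runc error as the other addon tests. A hypothetical follow-up check, not part of the test itself, to confirm the cleanup really removed the test objects:

	# Should report no hpvc/hpvc-restore PVCs and no new-snapshot-demo snapshot left behind.
	kubectl --context addons-916029 get pvc,volumesnapshot -n default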

                                                
                                    
TestAddons/parallel/Headlamp (2.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-916029 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-916029 --alsologtostderr -v=1: exit status 11 (261.221884ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:12.092035   19740 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:12.092171   19740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:12.092183   19740 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:12.092187   19740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:12.092412   19740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:12.092718   19740 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:12.093139   19740 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:12.093160   19740 addons.go:622] checking whether the cluster is paused
	I1213 08:31:12.093261   19740 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:12.093274   19740 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:12.093735   19740 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:12.112323   19740 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:12.112372   19740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:12.130689   19740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:12.232365   19740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:12.232431   19740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:12.262680   19740 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:12.262716   19740 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:12.262723   19740 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:12.262728   19740 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:12.262733   19740 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:12.262739   19740 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:12.262744   19740 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:12.262751   19740 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:12.262756   19740 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:12.262776   19740 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:12.262785   19740 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:12.262790   19740 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:12.262797   19740 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:12.262803   19740 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:12.262810   19740 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:12.262825   19740 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:12.262833   19740 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:12.262839   19740 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:12.262844   19740 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:12.262849   19740 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:12.262860   19740 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:12.262868   19740 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:12.262872   19740 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:12.262880   19740 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:12.262885   19740 cri.go:89] found id: ""
	I1213 08:31:12.262942   19740 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:12.277777   19740 out.go:203] 
	W1213 08:31:12.279726   19740 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:12.279761   19740 out.go:285] * 
	* 
	W1213 08:31:12.283408   19740 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:12.285352   19740 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-916029 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-916029
helpers_test.go:244: (dbg) docker inspect addons-916029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4",
	        "Created": "2025-12-13T08:29:21.980066347Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11724,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T08:29:22.016232535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/hosts",
	        "LogPath": "/var/lib/docker/containers/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4/3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4-json.log",
	        "Name": "/addons-916029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-916029:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-916029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3894a43e7e24d1bd07160a75f88b7ab24966bb4b3ec32318651593cd1af3a1a4",
	                "LowerDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f905af889a0ffe5ffdfab92efe16906820ec25e53d85e0b60d622e2f3b35f5fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-916029",
	                "Source": "/var/lib/docker/volumes/addons-916029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-916029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-916029",
	                "name.minikube.sigs.k8s.io": "addons-916029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dfb20378f22610eca91961f905378322aa67dfb22ae2b60c2f95b1e54a778df6",
	            "SandboxKey": "/var/run/docker/netns/dfb20378f226",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-916029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1065423f0bf7f549d79255e3aec14adf8b6fc7a290fbc66b4874cee25a2f6f5d",
	                    "EndpointID": "e5f4d7fa8f3ae81408c4edfd4cc14ebb22920d7f52578a9880672c344b1c340a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:4c:ec:08:97:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-916029",
	                        "3894a43e7e24"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
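Two details in the docker inspect dump above are worth pulling out when triaging the runc error: the kicbase node mounts /run as a tmpfs, and the container itself is started with the runc runtime on the host side. A small sketch for extracting just those fields with docker's Go-template formatter (same profile name as the rest of this run); whether the tmpfs /run is what leaves /run/runc absent inside the node is an assumption, not something this report proves:

	# Print the tmpfs mounts configured for the kicbase container ("/run" and "/tmp"):
	docker inspect addons-916029 --format '{{json .HostConfig.Tmpfs}}'

	# Print the host-side OCI runtime used to start the container ("runc"):
	docker inspect addons-916029 --format '{{.HostConfig.Runtime}}'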
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-916029 -n addons-916029
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-916029 logs -n 25: (1.136627208s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-202898   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-202898                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-202898   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-109226 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-109226   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-109226                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-109226   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-124765 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-124765   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-124765                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-124765   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-202898                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-202898   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-109226                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-109226   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-124765                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-124765   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ --download-only -p download-docker-239724 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-239724 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ -p download-docker-239724                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-239724 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ --download-only -p binary-mirror-949734 --alsologtostderr --binary-mirror http://127.0.0.1:46283 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-949734   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-949734                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-949734   │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ addons  │ disable dashboard -p addons-916029                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ addons  │ enable dashboard -p addons-916029                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ start   │ -p addons-916029 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:31 UTC │
	│ addons  │ addons-916029 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ addons-916029 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-916029 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-916029          │ jenkins │ v1.37.0 │ 13 Dec 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:58.330896   11062 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:58.331164   11062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.331175   11062 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:58.331182   11062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:58.331413   11062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:28:58.331969   11062 out.go:368] Setting JSON to false
	I1213 08:28:58.332774   11062 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":690,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:58.332829   11062 start.go:143] virtualization: kvm guest
	I1213 08:28:58.334662   11062 out.go:179] * [addons-916029] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:58.336362   11062 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:28:58.336361   11062 notify.go:221] Checking for updates...
	I1213 08:28:58.339105   11062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:58.340301   11062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:28:58.341500   11062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:28:58.342877   11062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:28:58.344152   11062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:28:58.345544   11062 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:58.368897   11062 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:28:58.369007   11062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:58.423115   11062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 08:28:58.413568723 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:58.423215   11062 docker.go:319] overlay module found
	I1213 08:28:58.424959   11062 out.go:179] * Using the docker driver based on user configuration
	I1213 08:28:58.426230   11062 start.go:309] selected driver: docker
	I1213 08:28:58.426253   11062 start.go:927] validating driver "docker" against <nil>
	I1213 08:28:58.426265   11062 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:28:58.426855   11062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:58.478231   11062 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-13 08:28:58.46930263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:58.478383   11062 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:58.478621   11062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:28:58.480230   11062 out.go:179] * Using Docker driver with root privileges
	I1213 08:28:58.481347   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:28:58.481407   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:28:58.481420   11062 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 08:28:58.481475   11062 start.go:353] cluster config:
	{Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1213 08:28:58.482751   11062 out.go:179] * Starting "addons-916029" primary control-plane node in "addons-916029" cluster
	I1213 08:28:58.483829   11062 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 08:28:58.484902   11062 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 08:28:58.486030   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:28:58.486065   11062 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 08:28:58.486075   11062 cache.go:65] Caching tarball of preloaded images
	I1213 08:28:58.486131   11062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 08:28:58.486170   11062 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 08:28:58.486185   11062 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 08:28:58.486629   11062 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json ...
	I1213 08:28:58.486660   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json: {Name:mke73757f22c2faac14c0204a0d0625a7a26d76a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:28:58.502430   11062 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f to local cache
	I1213 08:28:58.502581   11062 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory
	I1213 08:28:58.502600   11062 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local cache directory, skipping pull
	I1213 08:28:58.502604   11062 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in cache, skipping pull
	I1213 08:28:58.502612   11062 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f as a tarball
	I1213 08:28:58.502619   11062 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from local cache
	I1213 08:29:11.164096   11062 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f from cached tarball
	I1213 08:29:11.164135   11062 cache.go:243] Successfully downloaded all kic artifacts
	I1213 08:29:11.164197   11062 start.go:360] acquireMachinesLock for addons-916029: {Name:mk5895ba534e61d0049c0be22d884e3317bb56b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 08:29:11.164296   11062 start.go:364] duration metric: took 76.351µs to acquireMachinesLock for "addons-916029"
	I1213 08:29:11.164320   11062 start.go:93] Provisioning new machine with config: &{Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:29:11.164389   11062 start.go:125] createHost starting for "" (driver="docker")
	I1213 08:29:11.166113   11062 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1213 08:29:11.166316   11062 start.go:159] libmachine.API.Create for "addons-916029" (driver="docker")
	I1213 08:29:11.166344   11062 client.go:173] LocalClient.Create starting
	I1213 08:29:11.166419   11062 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem
	I1213 08:29:11.394456   11062 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem
	I1213 08:29:11.436358   11062 cli_runner.go:164] Run: docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 08:29:11.454587   11062 cli_runner.go:211] docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 08:29:11.454672   11062 network_create.go:284] running [docker network inspect addons-916029] to gather additional debugging logs...
	I1213 08:29:11.454694   11062 cli_runner.go:164] Run: docker network inspect addons-916029
	W1213 08:29:11.470235   11062 cli_runner.go:211] docker network inspect addons-916029 returned with exit code 1
	I1213 08:29:11.470261   11062 network_create.go:287] error running [docker network inspect addons-916029]: docker network inspect addons-916029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-916029 not found
	I1213 08:29:11.470280   11062 network_create.go:289] output of [docker network inspect addons-916029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-916029 not found
	
	** /stderr **
	I1213 08:29:11.470374   11062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:29:11.487838   11062 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00170aa20}
	I1213 08:29:11.487886   11062 network_create.go:124] attempt to create docker network addons-916029 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 08:29:11.487935   11062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-916029 addons-916029
	I1213 08:29:11.535152   11062 network_create.go:108] docker network addons-916029 192.168.49.0/24 created
	I1213 08:29:11.535184   11062 kic.go:121] calculated static IP "192.168.49.2" for the "addons-916029" container
	I1213 08:29:11.535243   11062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 08:29:11.551306   11062 cli_runner.go:164] Run: docker volume create addons-916029 --label name.minikube.sigs.k8s.io=addons-916029 --label created_by.minikube.sigs.k8s.io=true
	I1213 08:29:11.568450   11062 oci.go:103] Successfully created a docker volume addons-916029
	I1213 08:29:11.568535   11062 cli_runner.go:164] Run: docker run --rm --name addons-916029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --entrypoint /usr/bin/test -v addons-916029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 08:29:18.123717   11062 cli_runner.go:217] Completed: docker run --rm --name addons-916029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --entrypoint /usr/bin/test -v addons-916029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib: (6.555137766s)
	I1213 08:29:18.123748   11062 oci.go:107] Successfully prepared a docker volume addons-916029
	I1213 08:29:18.123775   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:18.123786   11062 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 08:29:18.123846   11062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-916029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 08:29:21.909269   11062 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-916029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.785383863s)
	I1213 08:29:21.909298   11062 kic.go:203] duration metric: took 3.785511007s to extract preloaded images to volume ...
	W1213 08:29:21.909388   11062 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 08:29:21.909416   11062 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 08:29:21.909453   11062 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 08:29:21.963282   11062 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-916029 --name addons-916029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-916029 --network addons-916029 --ip 192.168.49.2 --volume addons-916029:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 08:29:22.265745   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Running}}
	I1213 08:29:22.284889   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.304303   11062 cli_runner.go:164] Run: docker exec addons-916029 stat /var/lib/dpkg/alternatives/iptables
	I1213 08:29:22.354219   11062 oci.go:144] the created container "addons-916029" has a running status.
	I1213 08:29:22.354254   11062 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa...
	I1213 08:29:22.394987   11062 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 08:29:22.428238   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.446724   11062 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 08:29:22.446746   11062 kic_runner.go:114] Args: [docker exec --privileged addons-916029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 08:29:22.488608   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:22.508918   11062 machine.go:94] provisionDockerMachine start ...
	I1213 08:29:22.509023   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:22.533176   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:22.533417   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:22.533429   11062 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 08:29:22.534778   11062 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57410->127.0.0.1:32768: read: connection reset by peer
	I1213 08:29:25.668258   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-916029
	
	I1213 08:29:25.668288   11062 ubuntu.go:182] provisioning hostname "addons-916029"
	I1213 08:29:25.668356   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:25.685794   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:25.686090   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:25.686104   11062 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-916029 && echo "addons-916029" | sudo tee /etc/hostname
	I1213 08:29:25.824765   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-916029
	
	I1213 08:29:25.824845   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:25.842753   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:25.842984   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:25.843000   11062 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-916029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-916029/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-916029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 08:29:25.973973   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 08:29:25.973999   11062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 08:29:25.974036   11062 ubuntu.go:190] setting up certificates
	I1213 08:29:25.974048   11062 provision.go:84] configureAuth start
	I1213 08:29:25.974106   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:25.991051   11062 provision.go:143] copyHostCerts
	I1213 08:29:25.991125   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 08:29:25.991243   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 08:29:25.991311   11062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 08:29:25.991363   11062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.addons-916029 san=[127.0.0.1 192.168.49.2 addons-916029 localhost minikube]
	I1213 08:29:26.068458   11062 provision.go:177] copyRemoteCerts
	I1213 08:29:26.068525   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 08:29:26.068563   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.086015   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.181354   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 08:29:26.199612   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 08:29:26.216263   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 08:29:26.232776   11062 provision.go:87] duration metric: took 258.709066ms to configureAuth
	I1213 08:29:26.232801   11062 ubuntu.go:206] setting minikube options for container-runtime
	I1213 08:29:26.232972   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:26.233080   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.250391   11062 main.go:143] libmachine: Using SSH client type: native
	I1213 08:29:26.250674   11062 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1213 08:29:26.250699   11062 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 08:29:26.512040   11062 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 08:29:26.512066   11062 machine.go:97] duration metric: took 4.003123158s to provisionDockerMachine
	I1213 08:29:26.512076   11062 client.go:176] duration metric: took 15.345726105s to LocalClient.Create
	I1213 08:29:26.512091   11062 start.go:167] duration metric: took 15.345777412s to libmachine.API.Create "addons-916029"
	I1213 08:29:26.512125   11062 start.go:293] postStartSetup for "addons-916029" (driver="docker")
	I1213 08:29:26.512138   11062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 08:29:26.512194   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 08:29:26.512241   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.528948   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.625750   11062 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 08:29:26.629067   11062 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 08:29:26.629091   11062 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 08:29:26.629101   11062 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 08:29:26.629151   11062 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 08:29:26.629188   11062 start.go:296] duration metric: took 117.055364ms for postStartSetup
	I1213 08:29:26.629476   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:26.647571   11062 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/config.json ...
	I1213 08:29:26.647817   11062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:29:26.647857   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.664479   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.756306   11062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 08:29:26.760661   11062 start.go:128] duration metric: took 15.596257379s to createHost
	I1213 08:29:26.760688   11062 start.go:83] releasing machines lock for "addons-916029", held for 15.596379109s
	I1213 08:29:26.760754   11062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916029
	I1213 08:29:26.777424   11062 ssh_runner.go:195] Run: cat /version.json
	I1213 08:29:26.777472   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.777563   11062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 08:29:26.777619   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:26.795512   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.795836   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:26.940192   11062 ssh_runner.go:195] Run: systemctl --version
	I1213 08:29:26.946399   11062 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 08:29:26.980496   11062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 08:29:26.985031   11062 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 08:29:26.985089   11062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 08:29:27.009777   11062 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 08:29:27.009803   11062 start.go:496] detecting cgroup driver to use...
	I1213 08:29:27.009837   11062 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 08:29:27.009894   11062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 08:29:27.025559   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 08:29:27.037360   11062 docker.go:218] disabling cri-docker service (if available) ...
	I1213 08:29:27.037404   11062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 08:29:27.052811   11062 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 08:29:27.069049   11062 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 08:29:27.150572   11062 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 08:29:27.235023   11062 docker.go:234] disabling docker service ...
	I1213 08:29:27.235081   11062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 08:29:27.252651   11062 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 08:29:27.264776   11062 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 08:29:27.346939   11062 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 08:29:27.427152   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 08:29:27.439351   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 08:29:27.452889   11062 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 08:29:27.452946   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.462642   11062 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 08:29:27.462697   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.471119   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.479140   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.487683   11062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 08:29:27.496178   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.504408   11062 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.517135   11062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 08:29:27.525066   11062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 08:29:27.531605   11062 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1213 08:29:27.531659   11062 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1213 08:29:27.543556   11062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 08:29:27.550600   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:27.628204   11062 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 08:29:27.753146   11062 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 08:29:27.753235   11062 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 08:29:27.757062   11062 start.go:564] Will wait 60s for crictl version
	I1213 08:29:27.757109   11062 ssh_runner.go:195] Run: which crictl
	I1213 08:29:27.760424   11062 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 08:29:27.784923   11062 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 08:29:27.785048   11062 ssh_runner.go:195] Run: crio --version
	I1213 08:29:27.812026   11062 ssh_runner.go:195] Run: crio --version
	I1213 08:29:27.840266   11062 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 08:29:27.841604   11062 cli_runner.go:164] Run: docker network inspect addons-916029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 08:29:27.858217   11062 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 08:29:27.862041   11062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
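The /etc/hosts rewrite above follows the usual minikube pattern: drop any stale host.minikube.internal entry with grep -v, append the new mapping, and copy the temp file back over /etc/hosts. Inside the node it should then show up as (illustrative check):

	grep host.minikube.internal /etc/hosts    # expected: 192.168.49.1	host.minikube.internal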
	I1213 08:29:27.871907   11062 kubeadm.go:884] updating cluster {Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 08:29:27.872021   11062 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 08:29:27.872068   11062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:29:27.902679   11062 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 08:29:27.902700   11062 crio.go:433] Images already preloaded, skipping extraction
	I1213 08:29:27.902751   11062 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 08:29:27.926328   11062 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 08:29:27.926348   11062 cache_images.go:86] Images are preloaded, skipping loading
	I1213 08:29:27.926355   11062 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1213 08:29:27.926436   11062 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-916029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
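The unit drop-in shown above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 363-byte scp at 08:29:27.986). On the node, the merged unit and drop-in can be inspected with the standard systemd command (illustrative, not something this test runs):

	systemctl cat kubelet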
	I1213 08:29:27.926541   11062 ssh_runner.go:195] Run: crio config
	I1213 08:29:27.970733   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:29:27.970765   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:29:27.970784   11062 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 08:29:27.970812   11062 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-916029 NodeName:addons-916029 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 08:29:27.970946   11062 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-916029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 08:29:27.971025   11062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 08:29:27.978864   11062 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 08:29:27.978928   11062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 08:29:27.986099   11062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 08:29:27.998099   11062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 08:29:28.012519   11062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
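The 2209-byte file written above is the kubeadm config printed earlier; it is moved to /var/tmp/minikube/kubeadm.yaml by the cp at 08:29:28.676 before kubeadm init consumes it. A hand check from the host would look roughly like this (illustrative; the config validate subcommand exists in recent kubeadm releases, so treat the exact invocation as an assumption rather than what minikube itself runs):

	minikube ssh -p addons-916029 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	minikube ssh -p addons-916029 -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml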
	I1213 08:29:28.024069   11062 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 08:29:28.027246   11062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 08:29:28.037006   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:28.119015   11062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:29:28.147756   11062 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029 for IP: 192.168.49.2
	I1213 08:29:28.147778   11062 certs.go:195] generating shared ca certs ...
	I1213 08:29:28.147797   11062 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.147958   11062 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 08:29:28.264036   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt ...
	I1213 08:29:28.264064   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt: {Name:mk6f4ee1daf6a670a71cd3dd080f8993bfdb577b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.264227   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key ...
	I1213 08:29:28.264239   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key: {Name:mka38e30a6f036b3c2f294b94ec42c8b0adf6ee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.264316   11062 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 08:29:28.286058   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt ...
	I1213 08:29:28.286079   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt: {Name:mk8ae9b8d202e16240cd2b000add4644ae6c0413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.286193   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key ...
	I1213 08:29:28.286204   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key: {Name:mkde16481c205129da63b86fd12ee04100fe81c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.286269   11062 certs.go:257] generating profile certs ...
	I1213 08:29:28.286319   11062 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key
	I1213 08:29:28.286331   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt with IP's: []
	I1213 08:29:28.313252   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt ...
	I1213 08:29:28.313276   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: {Name:mk05226f9e9a0d94a2b200dc42c1ed79ca290688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.313422   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key ...
	I1213 08:29:28.313433   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.key: {Name:mk6d12a9d6d1bffb939c2c87b1972932835eef08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.313522   11062 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c
	I1213 08:29:28.313540   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 08:29:28.352215   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c ...
	I1213 08:29:28.352238   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c: {Name:mkf26e5a215376a6d773f3e86a1c5513d5049010 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.352383   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c ...
	I1213 08:29:28.352396   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c: {Name:mk6aff181408a3e98efe2b7ea62b1cca2d842d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.352464   11062 certs.go:382] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt.b8a5591c -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt
	I1213 08:29:28.352569   11062 certs.go:386] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key.b8a5591c -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key
	I1213 08:29:28.352624   11062 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key
	I1213 08:29:28.352641   11062 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt with IP's: []
	I1213 08:29:28.398368   11062 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt ...
	I1213 08:29:28.398401   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt: {Name:mk8a3e4f27724468ad6a80836fa15030ca4ea359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.398568   11062 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key ...
	I1213 08:29:28.398578   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key: {Name:mk554f31534e58c6c00827c652100f20a150e212 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:28.398756   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 08:29:28.398795   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 08:29:28.398820   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 08:29:28.398842   11062 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 08:29:28.399482   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 08:29:28.417339   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 08:29:28.434480   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 08:29:28.451271   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 08:29:28.468261   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 08:29:28.484741   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 08:29:28.500854   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 08:29:28.517259   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 08:29:28.534219   11062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 08:29:28.552912   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 08:29:28.565367   11062 ssh_runner.go:195] Run: openssl version
	I1213 08:29:28.571232   11062 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.578245   11062 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 08:29:28.588169   11062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.591928   11062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.591992   11062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 08:29:28.625154   11062 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 08:29:28.632342   11062 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
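The two steps above are standard OpenSSL trust-store wiring: certificates under /etc/ssl/certs are looked up by a subject-hash filename, so the hash is computed first and the .0 symlink is created from it. A hand-rolled equivalent of what the log shows:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"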
	I1213 08:29:28.639495   11062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 08:29:28.643031   11062 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 08:29:28.643079   11062 kubeadm.go:401] StartCluster: {Name:addons-916029 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-916029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:29:28.643153   11062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:29:28.643204   11062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:29:28.668446   11062 cri.go:89] found id: ""
	I1213 08:29:28.668542   11062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 08:29:28.676168   11062 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 08:29:28.683673   11062 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 08:29:28.683718   11062 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 08:29:28.691421   11062 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 08:29:28.691436   11062 kubeadm.go:158] found existing configuration files:
	
	I1213 08:29:28.691471   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 08:29:28.698608   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 08:29:28.698654   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 08:29:28.705436   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 08:29:28.712434   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 08:29:28.712481   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 08:29:28.719122   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 08:29:28.725986   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 08:29:28.726028   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 08:29:28.732675   11062 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 08:29:28.739730   11062 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 08:29:28.739774   11062 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 08:29:28.747045   11062 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 08:29:28.781516   11062 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 08:29:28.781570   11062 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 08:29:28.812595   11062 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 08:29:28.812670   11062 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 08:29:28.812715   11062 kubeadm.go:319] OS: Linux
	I1213 08:29:28.812767   11062 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 08:29:28.812818   11062 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 08:29:28.812887   11062 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 08:29:28.812930   11062 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 08:29:28.812971   11062 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 08:29:28.813023   11062 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 08:29:28.813065   11062 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 08:29:28.813147   11062 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 08:29:28.869856   11062 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 08:29:28.870049   11062 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 08:29:28.870185   11062 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 08:29:28.876369   11062 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 08:29:28.878372   11062 out.go:252]   - Generating certificates and keys ...
	I1213 08:29:28.878469   11062 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1213 08:29:28.878571   11062 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1213 08:29:28.981268   11062 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 08:29:29.118074   11062 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1213 08:29:29.167474   11062 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1213 08:29:29.274514   11062 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1213 08:29:29.344379   11062 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1213 08:29:29.344554   11062 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-916029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:29:29.470053   11062 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1213 08:29:29.470234   11062 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-916029 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 08:29:29.633396   11062 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 08:29:29.818086   11062 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 08:29:30.123327   11062 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1213 08:29:30.123425   11062 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 08:29:30.541521   11062 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 08:29:30.689228   11062 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 08:29:30.771292   11062 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 08:29:30.840360   11062 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 08:29:31.354737   11062 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 08:29:31.356324   11062 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 08:29:31.360141   11062 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 08:29:31.361866   11062 out.go:252]   - Booting up control plane ...
	I1213 08:29:31.361966   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 08:29:31.362049   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 08:29:31.362452   11062 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 08:29:31.375837   11062 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 08:29:31.375953   11062 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 08:29:31.382435   11062 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 08:29:31.382661   11062 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 08:29:31.382708   11062 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 08:29:31.477369   11062 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 08:29:31.477562   11062 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 08:29:31.978671   11062 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.388074ms
	I1213 08:29:31.982412   11062 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 08:29:31.982555   11062 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1213 08:29:31.982716   11062 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 08:29:31.982842   11062 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 08:29:33.164784   11062 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.182519867s
	I1213 08:29:34.101969   11062 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.119816186s
	I1213 08:29:35.983786   11062 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001377853s
	I1213 08:29:35.999243   11062 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 08:29:36.009094   11062 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 08:29:36.017138   11062 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 08:29:36.017408   11062 kubeadm.go:319] [mark-control-plane] Marking the node addons-916029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 08:29:36.024532   11062 kubeadm.go:319] [bootstrap-token] Using token: h5re1o.w3neybk59b02aves
	I1213 08:29:36.025714   11062 out.go:252]   - Configuring RBAC rules ...
	I1213 08:29:36.025824   11062 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 08:29:36.028532   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 08:29:36.033112   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 08:29:36.036330   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 08:29:36.038711   11062 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 08:29:36.040878   11062 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 08:29:36.389845   11062 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 08:29:36.803732   11062 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 08:29:37.391873   11062 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 08:29:37.392718   11062 kubeadm.go:319] 
	I1213 08:29:37.392811   11062 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 08:29:37.392822   11062 kubeadm.go:319] 
	I1213 08:29:37.392934   11062 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 08:29:37.392962   11062 kubeadm.go:319] 
	I1213 08:29:37.393018   11062 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 08:29:37.393099   11062 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 08:29:37.393168   11062 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 08:29:37.393195   11062 kubeadm.go:319] 
	I1213 08:29:37.393266   11062 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 08:29:37.393275   11062 kubeadm.go:319] 
	I1213 08:29:37.393335   11062 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 08:29:37.393344   11062 kubeadm.go:319] 
	I1213 08:29:37.393424   11062 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 08:29:37.393581   11062 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 08:29:37.393701   11062 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 08:29:37.393710   11062 kubeadm.go:319] 
	I1213 08:29:37.393812   11062 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 08:29:37.393928   11062 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 08:29:37.393936   11062 kubeadm.go:319] 
	I1213 08:29:37.394052   11062 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token h5re1o.w3neybk59b02aves \
	I1213 08:29:37.394169   11062 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c \
	I1213 08:29:37.394194   11062 kubeadm.go:319] 	--control-plane 
	I1213 08:29:37.394207   11062 kubeadm.go:319] 
	I1213 08:29:37.394316   11062 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 08:29:37.394323   11062 kubeadm.go:319] 
	I1213 08:29:37.394418   11062 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token h5re1o.w3neybk59b02aves \
	I1213 08:29:37.394591   11062 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c 
	I1213 08:29:37.396351   11062 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 08:29:37.396457   11062 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 08:29:37.396501   11062 cni.go:84] Creating CNI manager for ""
	I1213 08:29:37.396515   11062 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 08:29:37.399202   11062 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 08:29:37.400376   11062 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 08:29:37.404544   11062 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1213 08:29:37.404563   11062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 08:29:37.416939   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 08:29:37.612049   11062 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 08:29:37.612128   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:37.612172   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-916029 minikube.k8s.io/updated_at=2025_12_13T08_29_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=addons-916029 minikube.k8s.io/primary=true
	I1213 08:29:37.622775   11062 ops.go:34] apiserver oom_adj: -16
	I1213 08:29:37.697240   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:38.197554   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:38.697476   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:39.198035   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:39.697347   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:40.198073   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:40.697616   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:41.197698   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:41.697916   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:42.197356   11062 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 08:29:42.260586   11062 kubeadm.go:1114] duration metric: took 4.648516896s to wait for elevateKubeSystemPrivileges
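The repeated "kubectl get sa default" calls above are the wait loop behind the elevateKubeSystemPrivileges timing just reported: minikube keeps polling until the controller-manager has created the default ServiceAccount before moving on. From the host the same check is roughly (the kubeconfig context name is assumed to match the profile, as minikube normally sets it):

	kubectl --context addons-916029 -n default get serviceaccount default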
	I1213 08:29:42.260623   11062 kubeadm.go:403] duration metric: took 13.617549011s to StartCluster
	I1213 08:29:42.260642   11062 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:42.260795   11062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:29:42.261315   11062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 08:29:42.261542   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 08:29:42.261570   11062 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 08:29:42.261639   11062 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 08:29:42.261746   11062 addons.go:70] Setting yakd=true in profile "addons-916029"
	I1213 08:29:42.261747   11062 addons.go:70] Setting gcp-auth=true in profile "addons-916029"
	I1213 08:29:42.261793   11062 addons.go:239] Setting addon yakd=true in "addons-916029"
	I1213 08:29:42.261802   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:42.261811   11062 mustload.go:66] Loading cluster: addons-916029
	I1213 08:29:42.261830   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.261804   11062 addons.go:70] Setting registry=true in profile "addons-916029"
	I1213 08:29:42.261830   11062 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-916029"
	I1213 08:29:42.261877   11062 addons.go:239] Setting addon registry=true in "addons-916029"
	I1213 08:29:42.261886   11062 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-916029"
	I1213 08:29:42.261935   11062 addons.go:70] Setting inspektor-gadget=true in profile "addons-916029"
	I1213 08:29:42.261951   11062 addons.go:239] Setting addon inspektor-gadget=true in "addons-916029"
	I1213 08:29:42.261958   11062 addons.go:70] Setting metrics-server=true in profile "addons-916029"
	I1213 08:29:42.261931   11062 addons.go:70] Setting volcano=true in profile "addons-916029"
	I1213 08:29:42.261973   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.261979   11062 addons.go:239] Setting addon metrics-server=true in "addons-916029"
	I1213 08:29:42.261996   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262005   11062 addons.go:239] Setting addon volcano=true in "addons-916029"
	I1213 08:29:42.262051   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262077   11062 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:29:42.262316   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262326   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262396   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262479   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262507   11062 addons.go:70] Setting registry-creds=true in profile "addons-916029"
	I1213 08:29:42.262524   11062 addons.go:239] Setting addon registry-creds=true in "addons-916029"
	I1213 08:29:42.262543   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.262647   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.262702   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.263050   11062 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-916029"
	I1213 08:29:42.263076   11062 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-916029"
	I1213 08:29:42.263101   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.263852   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.264277   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.264461   11062 addons.go:70] Setting ingress=true in profile "addons-916029"
	I1213 08:29:42.264476   11062 addons.go:239] Setting addon ingress=true in "addons-916029"
	I1213 08:29:42.264525   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.264609   11062 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-916029"
	I1213 08:29:42.264634   11062 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-916029"
	I1213 08:29:42.264765   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.264983   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.265030   11062 addons.go:70] Setting volumesnapshots=true in profile "addons-916029"
	I1213 08:29:42.265047   11062 addons.go:239] Setting addon volumesnapshots=true in "addons-916029"
	I1213 08:29:42.265093   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.265590   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.265695   11062 out.go:179] * Verifying Kubernetes components...
	I1213 08:29:42.265928   11062 addons.go:70] Setting storage-provisioner=true in profile "addons-916029"
	I1213 08:29:42.265966   11062 addons.go:239] Setting addon storage-provisioner=true in "addons-916029"
	I1213 08:29:42.266005   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.269924   11062 addons.go:70] Setting default-storageclass=true in profile "addons-916029"
	I1213 08:29:42.269970   11062 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-916029"
	I1213 08:29:42.275697   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.278297   11062 addons.go:70] Setting cloud-spanner=true in profile "addons-916029"
	I1213 08:29:42.278364   11062 addons.go:239] Setting addon cloud-spanner=true in "addons-916029"
	I1213 08:29:42.278403   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.279180   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.280716   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.281222   11062 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-916029"
	I1213 08:29:42.261939   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.282881   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.283062   11062 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-916029"
	I1213 08:29:42.283118   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.283719   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.284036   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.282118   11062 addons.go:70] Setting ingress-dns=true in profile "addons-916029"
	I1213 08:29:42.284857   11062 addons.go:239] Setting addon ingress-dns=true in "addons-916029"
	I1213 08:29:42.284935   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.289309   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.290992   11062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 08:29:42.315265   11062 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1213 08:29:42.319021   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 08:29:42.319047   11062 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 08:29:42.319116   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.322205   11062 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-916029"
	I1213 08:29:42.322255   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.325342   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.334593   11062 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1213 08:29:42.338657   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 08:29:42.338681   11062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 08:29:42.338757   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.343923   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.347113   11062 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 08:29:42.348564   11062 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:29:42.348583   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 08:29:42.348638   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.348684   11062 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1213 08:29:42.350426   11062 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:29:42.350580   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1213 08:29:42.350849   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	W1213 08:29:42.365235   11062 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 08:29:42.375419   11062 addons.go:239] Setting addon default-storageclass=true in "addons-916029"
	I1213 08:29:42.375954   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:42.376855   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:42.378004   11062 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1213 08:29:42.378065   11062 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1213 08:29:42.378084   11062 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1213 08:29:42.378099   11062 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1213 08:29:42.382023   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 08:29:42.383341   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 08:29:42.383401   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 08:29:42.383644   11062 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:29:42.383662   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1213 08:29:42.383720   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.383880   11062 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:29:42.383894   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1213 08:29:42.383927   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.384071   11062 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:29:42.384082   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 08:29:42.384112   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.385503   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 08:29:42.385710   11062 out.go:179]   - Using image docker.io/registry:3.0.0
	I1213 08:29:42.385737   11062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 08:29:42.385813   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1213 08:29:42.385845   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 08:29:42.387302   11062 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 08:29:42.387355   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.387103   11062 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 08:29:42.387569   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 08:29:42.387148   11062 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:29:42.387600   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 08:29:42.387643   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.387959   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.388615   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 08:29:42.391632   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:42.392830   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:42.393359   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 08:29:42.394136   11062 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:29:42.394159   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 08:29:42.394317   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.396150   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 08:29:42.400112   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 08:29:42.401463   11062 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 08:29:42.402958   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 08:29:42.402980   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 08:29:42.403047   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.409143   11062 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 08:29:42.409517   11062 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1213 08:29:42.410471   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.411770   11062 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 08:29:42.411956   11062 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1213 08:29:42.411968   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 08:29:42.412020   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.415252   11062 out.go:179]   - Using image docker.io/busybox:stable
	I1213 08:29:42.416462   11062 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:29:42.416478   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 08:29:42.416611   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.444217   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.444511   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.444609   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.451686   11062 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 08:29:42.451987   11062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 08:29:42.452073   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:42.453130   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.454807   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.455478   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.459272   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.464569   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.468583   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.473147   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.484653   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.485577   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.486560   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:42.493548   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	W1213 08:29:42.495451   11062 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1213 08:29:42.495587   11062 retry.go:31] will retry after 238.053176ms: ssh: handshake failed: EOF
	I1213 08:29:42.500856   11062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 08:29:42.585945   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1213 08:29:42.607531   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 08:29:42.607555   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 08:29:42.608951   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 08:29:42.615637   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 08:29:42.615659   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 08:29:42.629881   11062 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 08:29:42.629903   11062 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 08:29:42.630220   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 08:29:42.636360   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 08:29:42.636382   11062 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 08:29:42.643430   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 08:29:42.644434   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 08:29:42.644451   11062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 08:29:42.648651   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 08:29:42.649387   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 08:29:42.649411   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 08:29:42.651203   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 08:29:42.651221   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 08:29:42.653380   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 08:29:42.663524   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 08:29:42.671575   11062 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:29:42.671598   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 08:29:42.676085   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 08:29:42.676617   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 08:29:42.681684   11062 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:29:42.681709   11062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 08:29:42.701223   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 08:29:42.701249   11062 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 08:29:42.704407   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 08:29:42.704432   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 08:29:42.704702   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 08:29:42.704724   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 08:29:42.716942   11062 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 08:29:42.717798   11062 node_ready.go:35] waiting up to 6m0s for node "addons-916029" to be "Ready" ...
	I1213 08:29:42.723589   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 08:29:42.727742   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 08:29:42.758616   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 08:29:42.758661   11062 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 08:29:42.770022   11062 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 08:29:42.770053   11062 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 08:29:42.779875   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 08:29:42.779910   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 08:29:42.816659   11062 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:29:42.816756   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 08:29:42.820551   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 08:29:42.820626   11062 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 08:29:42.831279   11062 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 08:29:42.831302   11062 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 08:29:42.868870   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 08:29:42.869732   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 08:29:42.869753   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 08:29:42.898798   11062 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:42.898823   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 08:29:42.932220   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 08:29:42.932271   11062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 08:29:42.941338   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 08:29:42.960138   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:42.984454   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 08:29:42.984477   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 08:29:43.023740   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 08:29:43.023842   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 08:29:43.078113   11062 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:29:43.078223   11062 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 08:29:43.111521   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 08:29:43.228716   11062 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-916029" context rescaled to 1 replicas
	I1213 08:29:43.810695   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.157277498s)
	I1213 08:29:43.810728   11062 addons.go:495] Verifying addon ingress=true in "addons-916029"
	I1213 08:29:43.810936   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.147379833s)
	I1213 08:29:43.811215   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.135074015s)
	I1213 08:29:43.811463   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.134821754s)
	I1213 08:29:43.811618   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.08793918s)
	I1213 08:29:43.811636   11062 addons.go:495] Verifying addon registry=true in "addons-916029"
	I1213 08:29:43.811833   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.084015903s)
	I1213 08:29:43.811986   11062 addons.go:495] Verifying addon metrics-server=true in "addons-916029"
	I1213 08:29:43.812285   11062 out.go:179] * Verifying ingress addon...
	I1213 08:29:43.813094   11062 out.go:179] * Verifying registry addon...
	I1213 08:29:43.813157   11062 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-916029 service yakd-dashboard -n yakd-dashboard
	
	I1213 08:29:43.814635   11062 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 08:29:43.816059   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 08:29:43.819875   11062 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 08:29:43.819893   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:43.820039   11062 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 08:29:43.820059   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 08:29:43.827056   11062 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1213 08:29:44.212680   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.252491004s)
	W1213 08:29:44.212742   11062 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 08:29:44.212766   11062 retry.go:31] will retry after 348.428289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 08:29:44.212985   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.101379779s)
	I1213 08:29:44.213008   11062 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-916029"
	I1213 08:29:44.215272   11062 out.go:179] * Verifying csi-hostpath-driver addon...
	I1213 08:29:44.217400   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 08:29:44.221241   11062 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 08:29:44.221264   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:44.321743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:44.321928   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:44.561521   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 08:29:44.720621   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:29:44.720783   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:44.817722   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:44.819048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.221063   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:45.321884   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.322087   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:45.720559   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:45.821248   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:45.821512   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:46.221044   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:46.317956   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:46.318385   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:46.720692   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:46.821053   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:46.821342   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:47.023475   11062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461916079s)
	W1213 08:29:47.220505   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:47.220626   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:47.321261   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:47.321436   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:47.720386   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:47.820665   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:47.820730   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:48.220450   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:48.319995   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:48.320105   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:48.720925   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:48.821473   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:48.821640   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.220782   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:49.317522   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.318997   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1213 08:29:49.720522   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:49.720643   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:49.822265   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:49.822471   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:49.949647   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 08:29:49.949705   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:49.967364   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:50.067308   11062 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 08:29:50.079458   11062 addons.go:239] Setting addon gcp-auth=true in "addons-916029"
	I1213 08:29:50.079512   11062 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:29:50.079834   11062 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:29:50.097255   11062 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 08:29:50.097395   11062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:29:50.114777   11062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:29:50.208253   11062 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1213 08:29:50.209494   11062 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 08:29:50.210689   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 08:29:50.210701   11062 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 08:29:50.220550   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:50.224251   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 08:29:50.224268   11062 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 08:29:50.236189   11062 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:29:50.236209   11062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 08:29:50.248172   11062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 08:29:50.318020   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:50.318716   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:50.538781   11062 addons.go:495] Verifying addon gcp-auth=true in "addons-916029"
	I1213 08:29:50.540141   11062 out.go:179] * Verifying gcp-auth addon...
	I1213 08:29:50.542002   11062 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 08:29:50.544266   11062 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 08:29:50.544281   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:50.721273   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:50.817883   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:50.818405   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:51.044765   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:51.220309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:51.318214   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:51.318716   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:51.545449   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:51.719920   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:29:51.720599   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:51.818099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:51.818708   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:52.045512   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:52.219934   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:52.317734   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:52.318103   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:52.544811   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:52.720756   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:52.817389   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:52.818865   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:53.044457   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:53.219906   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:53.317626   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:53.318241   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:53.544929   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:53.720678   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:53.720807   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:53.817134   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:53.818558   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:54.044977   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:54.220601   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:54.318328   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:54.318853   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:54.544407   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:54.720212   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:54.818098   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:54.818836   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:55.045511   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:55.219937   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:55.317313   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:55.318918   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:55.544451   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:55.720282   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:55.817928   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:55.818577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:56.045141   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:56.220609   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:56.220710   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:56.318014   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:56.318830   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:56.545465   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:56.720280   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:56.817295   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:56.818727   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:57.045911   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:57.220343   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:57.318393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:57.318498   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:57.545172   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:57.720862   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:57.817350   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:57.818871   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:58.045559   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:58.219901   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:58.317673   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:58.318192   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:58.544932   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:29:58.720579   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:29:58.720860   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:58.817645   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:58.818207   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:59.044724   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:59.220297   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:59.317879   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:59.318357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:29:59.544927   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:29:59.720547   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:29:59.818001   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:29:59.818595   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:00.045062   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:00.220784   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:00.317284   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:00.318739   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:00.545308   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:00.720707   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:00.720969   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:00.817988   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:00.818430   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:01.045134   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:01.220710   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:01.317281   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:01.319256   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:01.544805   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:01.720258   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:01.818241   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:01.818848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:02.044397   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:02.220013   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:02.317430   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:02.318980   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:02.544560   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:02.720342   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:02.818065   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:02.818593   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:03.045156   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:03.220827   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:03.220931   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:03.317401   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:03.318888   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:03.544630   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:03.720094   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:03.817813   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:03.818285   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:04.044909   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:04.220575   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:04.318025   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:04.318821   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:04.545507   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:04.720055   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:04.818236   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:04.818308   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:05.044848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:05.220319   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:05.317763   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:05.318465   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:05.545137   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:05.720550   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:05.720695   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:05.817504   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:05.819030   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:06.044563   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:06.220341   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:06.317955   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:06.318412   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:06.545110   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:06.720888   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:06.818236   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:06.819120   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:07.044937   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:07.220558   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:07.318232   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:07.318789   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:07.545613   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:07.720689   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:07.817351   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:07.818971   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:08.044420   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:08.220053   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:08.220662   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:08.318072   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:08.318763   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:08.544509   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:08.720114   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:08.818262   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:08.818913   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:09.044381   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:09.220015   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:09.317250   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:09.318874   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:09.545439   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:09.719951   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:09.818229   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:09.818686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:10.045455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:10.220056   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:10.220737   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:10.317099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:10.318796   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:10.545455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:10.720004   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:10.817213   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:10.818903   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:11.044399   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:11.219838   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:11.317585   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:11.318334   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:11.544897   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:11.720530   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:11.818475   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:11.818645   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:12.045211   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:12.220800   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:12.317580   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:12.319020   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:12.544743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1213 08:30:12.720156   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:12.720238   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:12.818017   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:12.818471   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:13.045198   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:13.220669   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:13.317369   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:13.319166   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:13.544921   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:13.720745   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:13.817206   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:13.818861   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.045393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:14.219825   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:14.318251   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:14.318633   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.545355   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:14.719980   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:14.819171   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:14.819298   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.045383   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:15.220886   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:15.220914   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:15.317646   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.318357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:15.545244   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:15.721078   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:15.817865   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:15.818253   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:16.044911   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:16.220455   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:16.317914   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:16.318625   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:16.545375   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:16.720872   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:16.817768   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:16.818182   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:17.044760   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:17.220311   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:17.318180   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:17.318480   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:17.545124   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:17.720649   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:17.720665   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:17.818099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:17.818807   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:18.045518   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:18.220213   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:18.317668   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:18.318404   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:18.545247   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:18.720816   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:18.817272   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:18.818901   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:19.044364   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:19.219956   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:19.317317   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:19.318848   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:19.545154   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:19.720650   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:19.817292   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:19.819340   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:20.044826   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:20.220590   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:20.220590   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:20.318065   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:20.318778   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:20.545432   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:20.720175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:20.818307   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:20.818992   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:21.044409   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:21.219797   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:21.318300   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:21.318847   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:21.545325   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:21.720866   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:21.817534   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:21.819175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:22.044755   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:22.220304   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:22.318114   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:22.318596   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:22.545675   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:22.720195   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1213 08:30:22.720243   11062 node_ready.go:57] node "addons-916029" has "Ready":"False" status (will retry)
	I1213 08:30:22.817801   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:22.818482   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:23.045012   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:23.220846   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:23.317509   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:23.318119   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:23.545129   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:23.720857   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:23.817617   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:23.819108   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.044512   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:24.220098   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:24.319106   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:24.321811   11062 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 08:30:24.321835   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.545174   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:24.723012   11062 node_ready.go:49] node "addons-916029" is "Ready"
	I1213 08:30:24.723045   11062 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 08:30:24.723050   11062 node_ready.go:38] duration metric: took 42.00523054s for node "addons-916029" to be "Ready" ...
	I1213 08:30:24.723060   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:24.723072   11062 api_server.go:52] waiting for apiserver process to appear ...
	I1213 08:30:24.723156   11062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:30:24.742804   11062 api_server.go:72] duration metric: took 42.481197036s to wait for apiserver process to appear ...
	I1213 08:30:24.742832   11062 api_server.go:88] waiting for apiserver healthz status ...
	I1213 08:30:24.742859   11062 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 08:30:24.748144   11062 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 08:30:24.749200   11062 api_server.go:141] control plane version: v1.34.2
	I1213 08:30:24.749229   11062 api_server.go:131] duration metric: took 6.388845ms to wait for apiserver health ...
	I1213 08:30:24.749240   11062 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 08:30:24.822215   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:24.822244   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:24.824723   11062 system_pods.go:59] 20 kube-system pods found
	I1213 08:30:24.824819   11062 system_pods.go:61] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:24.824834   11062 system_pods.go:61] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:24.824854   11062 system_pods.go:61] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:24.824863   11062 system_pods.go:61] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:24.824879   11062 system_pods.go:61] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:24.824887   11062 system_pods.go:61] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:24.824901   11062 system_pods.go:61] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:24.824910   11062 system_pods.go:61] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:24.824916   11062 system_pods.go:61] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:24.824939   11062 system_pods.go:61] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:24.824949   11062 system_pods.go:61] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:24.824956   11062 system_pods.go:61] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:24.824974   11062 system_pods.go:61] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:24.824988   11062 system_pods.go:61] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:24.825003   11062 system_pods.go:61] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:24.825023   11062 system_pods.go:61] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:24.825036   11062 system_pods.go:61] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:24.825054   11062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.825065   11062 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.825074   11062 system_pods.go:61] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:24.825082   11062 system_pods.go:74] duration metric: took 75.834975ms to wait for pod list to return data ...
	I1213 08:30:24.825097   11062 default_sa.go:34] waiting for default service account to be created ...
	I1213 08:30:24.827448   11062 default_sa.go:45] found service account: "default"
	I1213 08:30:24.827472   11062 default_sa.go:55] duration metric: took 2.369082ms for default service account to be created ...
	I1213 08:30:24.827500   11062 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 08:30:24.923932   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:24.923968   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:24.923979   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:24.923988   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:24.923996   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:24.924004   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:24.924014   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:24.924026   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:24.924035   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:24.924042   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:24.924056   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:24.924064   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:24.924072   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:24.924083   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:24.924096   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:24.924108   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:24.924117   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:24.924128   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:24.924140   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.924153   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:24.924166   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:24.924197   11062 retry.go:31] will retry after 300.692208ms: missing components: kube-dns
	I1213 08:30:25.045996   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:25.223089   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:25.230000   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.230037   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.230049   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:25.230058   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.230066   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.230074   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.230093   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.230100   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.230106   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.230112   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.230122   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.230128   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.230134   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.230145   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.230155   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.230163   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.230171   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.230179   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.230187   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.230195   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.230210   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:25.230226   11062 retry.go:31] will retry after 242.687821ms: missing components: kube-dns
	I1213 08:30:25.319034   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:25.319959   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:25.478139   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.478186   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.478199   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 08:30:25.478210   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.478219   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.478228   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.478234   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.478242   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.478250   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.478256   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.478265   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.478270   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.478277   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.478286   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.478294   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.478310   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.478318   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.478325   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.478335   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.478346   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.478354   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 08:30:25.478372   11062 retry.go:31] will retry after 482.920653ms: missing components: kube-dns
	I1213 08:30:25.545686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:25.720945   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:25.821274   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:25.821454   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:25.966471   11062 system_pods.go:86] 20 kube-system pods found
	I1213 08:30:25.966522   11062 system_pods.go:89] "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1213 08:30:25.966532   11062 system_pods.go:89] "coredns-66bc5c9577-lp9sl" [00ef7b8e-0135-493f-922d-344c52c4baed] Running
	I1213 08:30:25.966544   11062 system_pods.go:89] "csi-hostpath-attacher-0" [bad8e904-0537-4df7-9c54-a019ca492be9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1213 08:30:25.966554   11062 system_pods.go:89] "csi-hostpath-resizer-0" [1b0fe61b-a000-4634-9e4b-035aa0d6f505] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1213 08:30:25.966563   11062 system_pods.go:89] "csi-hostpathplugin-btrm5" [0ba2f162-e12c-49c5-baa7-d4fd92d5a90e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1213 08:30:25.966570   11062 system_pods.go:89] "etcd-addons-916029" [208f86d1-2d1e-4d79-a5ee-f590c29596e1] Running
	I1213 08:30:25.966577   11062 system_pods.go:89] "kindnet-qpw8x" [72c26dae-5349-4b65-a4c7-18f040fb6031] Running
	I1213 08:30:25.966585   11062 system_pods.go:89] "kube-apiserver-addons-916029" [ca20d598-d9b7-49a9-b276-72c2a8862f22] Running
	I1213 08:30:25.966592   11062 system_pods.go:89] "kube-controller-manager-addons-916029" [60b922fa-1bec-4e0a-80f7-3ac3b3f1dc8a] Running
	I1213 08:30:25.966602   11062 system_pods.go:89] "kube-ingress-dns-minikube" [df5cf388-860f-4482-be0b-dc78781a80a1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1213 08:30:25.966609   11062 system_pods.go:89] "kube-proxy-kr7zc" [b0afa5ad-6da8-4a6f-9f27-a80a864f2cd0] Running
	I1213 08:30:25.966616   11062 system_pods.go:89] "kube-scheduler-addons-916029" [b9c61963-82d5-4bc1-80f8-b26393c435b8] Running
	I1213 08:30:25.966625   11062 system_pods.go:89] "metrics-server-85b7d694d7-zrm5x" [b5b354f9-a97f-4873-8ef3-19058bdced38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1213 08:30:25.966634   11062 system_pods.go:89] "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1213 08:30:25.966644   11062 system_pods.go:89] "registry-6b586f9694-xvfhz" [0232b267-4d81-470a-80c9-4f84718b005f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1213 08:30:25.966655   11062 system_pods.go:89] "registry-creds-764b6fb674-vj2wj" [d277eaca-8fc8-4604-81bc-7e6c4ab2feeb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1213 08:30:25.966664   11062 system_pods.go:89] "registry-proxy-cd6hw" [2ed47d8d-3ce8-4770-8877-300d97e12e3a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1213 08:30:25.966674   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d65q" [03557cbf-b655-4ef1-ac50-79bf69649e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.966686   11062 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wfsd6" [0f6d1e6f-b032-4afa-afb5-1a02cb7ea87b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1213 08:30:25.966693   11062 system_pods.go:89] "storage-provisioner" [8fbeefe7-3a90-470b-96ac-8422ad3a8592] Running
	I1213 08:30:25.966705   11062 system_pods.go:126] duration metric: took 1.139196644s to wait for k8s-apps to be running ...
	I1213 08:30:25.966717   11062 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 08:30:25.966779   11062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:30:25.983322   11062 system_svc.go:56] duration metric: took 16.595233ms WaitForService to wait for kubelet
	I1213 08:30:25.983352   11062 kubeadm.go:587] duration metric: took 43.721750528s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 08:30:25.983377   11062 node_conditions.go:102] verifying NodePressure condition ...
	I1213 08:30:25.985760   11062 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 08:30:25.985785   11062 node_conditions.go:123] node cpu capacity is 8
	I1213 08:30:25.985805   11062 node_conditions.go:105] duration metric: took 2.421957ms to run NodePressure ...
	I1213 08:30:25.985821   11062 start.go:242] waiting for startup goroutines ...
	I1213 08:30:26.046084   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:26.221580   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:26.318510   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:26.318755   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:26.545470   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:26.720657   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:26.818712   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:26.820190   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:27.045270   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:27.221587   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:27.318432   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:27.318822   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:27.546047   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:27.721440   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:27.818459   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:27.818891   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:28.045339   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:28.222096   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:28.318181   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:28.318449   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:28.545760   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:28.721148   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:28.817932   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:28.818339   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:29.044782   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:29.220741   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:29.318671   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:29.319050   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:29.544612   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:29.720243   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:29.818006   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:29.818735   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:30.046314   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:30.221056   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:30.319870   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:30.319891   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:30.545695   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:30.720828   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:30.817936   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:30.821037   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:31.045088   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:31.221130   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:31.317913   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:31.318464   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:31.545668   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:31.720791   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:31.818549   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:31.819107   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.058222   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:32.221175   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:32.317617   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:32.318469   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.545595   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:32.720808   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:32.819085   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:32.819085   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.045048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:33.221401   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:33.318178   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.318535   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:33.546357   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:33.721065   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:33.817452   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:33.819064   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:34.045944   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:34.220833   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:34.317528   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:34.319231   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:34.545552   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:34.720585   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:34.818404   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:34.819389   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:35.044914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:35.222845   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:35.317965   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:35.319289   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:35.545227   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:35.722069   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:35.817159   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:35.818593   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:36.047498   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:36.222898   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:36.317689   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:36.319870   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:36.545743   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:36.720882   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:36.873589   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:36.873679   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:37.127949   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:37.221456   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:37.318139   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.318850   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:37.546405   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:37.720509   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:37.818768   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:37.818820   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.046268   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:38.221362   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:38.318021   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:38.318862   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.545214   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:38.721430   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:38.819019   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:38.819859   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.045982   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:39.221010   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.317920   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.319199   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:39.545189   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:39.721577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:39.818527   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:39.819378   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.045753   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.221292   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.363864   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.363976   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:40.545472   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:40.720557   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:40.820432   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:40.820582   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.045349   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.220801   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.318313   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.318963   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:41.544273   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:41.720959   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:41.817248   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:41.818943   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.046317   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.221419   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.320708   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.321323   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:42.545553   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:42.721519   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:42.818507   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:42.818897   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.045369   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.221370   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.397914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:43.398118   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.545633   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:43.720381   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:43.818085   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:43.818799   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.045393   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.220353   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.320776   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.320914   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:44.545979   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:44.720731   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:44.817364   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:44.818861   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.046218   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.221309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.318641   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.318698   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:45.545996   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:45.721797   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:45.818522   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:45.818896   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.046418   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.221333   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.317882   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.318504   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:46.544591   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:46.720102   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:46.817934   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:46.818341   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.045951   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.221275   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.318037   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:47.318577   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.545565   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:47.720508   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:47.818817   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:47.818865   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.045828   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.220723   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.318179   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.318974   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:48.545000   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:48.721118   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:48.818111   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:48.818645   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 08:30:49.046048   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.221271   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.318099   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:49.318595   11062 kapi.go:107] duration metric: took 1m5.502535555s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 08:30:49.545382   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 08:30:49.721686   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:49.818124   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.046582   11062 kapi.go:107] duration metric: took 59.504574412s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 08:30:50.048566   11062 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-916029 cluster.
	I1213 08:30:50.051997   11062 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 08:30:50.054289   11062 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1213 08:30:50.220332   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.323059   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:50.723793   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:50.817798   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.221837   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.318380   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:51.720835   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:51.818325   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.245192   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.317734   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:52.721143   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:52.817899   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.221322   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.317821   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:53.721657   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:53.821790   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.221309   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.317857   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:54.721293   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:54.817593   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.220985   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.317834   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:55.721235   11062 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 08:30:55.822140   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.221467   11062 kapi.go:107] duration metric: took 1m12.004069041s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 08:30:56.322576   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:56.820253   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.318380   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:57.818060   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.318854   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:58.818416   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.318968   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:30:59.818458   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.317770   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:00.818547   11062 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 08:31:01.318030   11062 kapi.go:107] duration metric: took 1m17.503392746s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 08:31:01.319844   11062 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1213 08:31:01.321163   11062 addons.go:530] duration metric: took 1m19.059528306s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin storage-provisioner inspektor-gadget cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1213 08:31:01.321212   11062 start.go:247] waiting for cluster config update ...
	I1213 08:31:01.321240   11062 start.go:256] writing updated cluster config ...
	I1213 08:31:01.321519   11062 ssh_runner.go:195] Run: rm -f paused
	I1213 08:31:01.325385   11062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:01.328129   11062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lp9sl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.331670   11062 pod_ready.go:94] pod "coredns-66bc5c9577-lp9sl" is "Ready"
	I1213 08:31:01.331689   11062 pod_ready.go:86] duration metric: took 3.542ms for pod "coredns-66bc5c9577-lp9sl" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.333372   11062 pod_ready.go:83] waiting for pod "etcd-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.336499   11062 pod_ready.go:94] pod "etcd-addons-916029" is "Ready"
	I1213 08:31:01.336514   11062 pod_ready.go:86] duration metric: took 3.12678ms for pod "etcd-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.338070   11062 pod_ready.go:83] waiting for pod "kube-apiserver-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.341312   11062 pod_ready.go:94] pod "kube-apiserver-addons-916029" is "Ready"
	I1213 08:31:01.341332   11062 pod_ready.go:86] duration metric: took 3.245475ms for pod "kube-apiserver-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.342848   11062 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.728162   11062 pod_ready.go:94] pod "kube-controller-manager-addons-916029" is "Ready"
	I1213 08:31:01.728184   11062 pod_ready.go:86] duration metric: took 385.321785ms for pod "kube-controller-manager-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:01.928893   11062 pod_ready.go:83] waiting for pod "kube-proxy-kr7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.329346   11062 pod_ready.go:94] pod "kube-proxy-kr7zc" is "Ready"
	I1213 08:31:02.329377   11062 pod_ready.go:86] duration metric: took 400.458359ms for pod "kube-proxy-kr7zc" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.529165   11062 pod_ready.go:83] waiting for pod "kube-scheduler-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.928836   11062 pod_ready.go:94] pod "kube-scheduler-addons-916029" is "Ready"
	I1213 08:31:02.928860   11062 pod_ready.go:86] duration metric: took 399.667147ms for pod "kube-scheduler-addons-916029" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 08:31:02.928875   11062 pod_ready.go:40] duration metric: took 1.603468197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 08:31:02.976973   11062 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 08:31:02.978919   11062 out.go:179] * Done! kubectl is now configured to use "addons-916029" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 08:31:05 addons-916029 crio[778]: time="2025-12-13T08:31:05.555059779Z" level=info msg="Starting container: b2a20bd000d244285bae3f31f1b385faaf398e408c02e20ba4d35e58d1069e17" id=23acc7c3-c10f-47c3-bd57-db127bd8ea60 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 08:31:05 addons-916029 crio[778]: time="2025-12-13T08:31:05.55680712Z" level=info msg="Started container" PID=6206 containerID=b2a20bd000d244285bae3f31f1b385faaf398e408c02e20ba4d35e58d1069e17 description=default/busybox/busybox id=23acc7c3-c10f-47c3-bd57-db127bd8ea60 name=/runtime.v1.RuntimeService/StartContainer sandboxID=421c33a2994e600ad4d744965cb59f0f50e0b2749fd9e11b1b943759d41b0521
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.622229643Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188/POD" id=35316e77-c93f-4700-809a-6244e58516df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.622386927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.62940619Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188 Namespace:local-path-storage ID:c7d26b4658c3e589bfe172203790abfc2bc17550acf5f970270e561834a96526 UID:dbf0a4f8-348d-473b-8543-2ebad9c8ad01 NetNS:/var/run/netns/51b0e971-acb0-4747-acc5-250bf265a77f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad60}] Aliases:map[]}"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.629436016Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188 to CNI network \"kindnet\" (type=ptp)"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.640368172Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188 Namespace:local-path-storage ID:c7d26b4658c3e589bfe172203790abfc2bc17550acf5f970270e561834a96526 UID:dbf0a4f8-348d-473b-8543-2ebad9c8ad01 NetNS:/var/run/netns/51b0e971-acb0-4747-acc5-250bf265a77f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad60}] Aliases:map[]}"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.640514778Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188 for CNI network kindnet (type=ptp)"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.641286462Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.642026861Z" level=info msg="Ran pod sandbox c7d26b4658c3e589bfe172203790abfc2bc17550acf5f970270e561834a96526 with infra container: local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188/POD" id=35316e77-c93f-4700-809a-6244e58516df name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.643229492Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=8682d1a0-a5e9-418f-a241-b5d69d7073e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.643397175Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=8682d1a0-a5e9-418f-a241-b5d69d7073e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.643446695Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=8682d1a0-a5e9-418f-a241-b5d69d7073e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.643956231Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=b1214596-e9d1-401d-af2d-fd1a2c75cb25 name=/runtime.v1.ImageService/PullImage
	Dec 13 08:31:12 addons-916029 crio[778]: time="2025-12-13T08:31:12.64543693Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.096581878Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee" id=b1214596-e9d1-401d-af2d-fd1a2c75cb25 name=/runtime.v1.ImageService/PullImage
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.097161246Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=93d936e8-34ff-47bc-bbcd-7162c174b1ab name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.09913199Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=3b9623f7-3001-40cc-8bfb-30a5fe39a8e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.103015372Z" level=info msg="Creating container: local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188/helper-pod" id=edb01b86-82ba-4389-b010-037256a8c1b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.103134239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.108883387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.109284597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.155657455Z" level=info msg="Created container c21d94dc65c6eb8da13eee08e418b427aa96780f97b8c5221f4a8a5ed04e8b21: local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188/helper-pod" id=edb01b86-82ba-4389-b010-037256a8c1b5 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.156299163Z" level=info msg="Starting container: c21d94dc65c6eb8da13eee08e418b427aa96780f97b8c5221f4a8a5ed04e8b21" id=3c929de3-3c1f-460d-b791-934f9afca9be name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 08:31:13 addons-916029 crio[778]: time="2025-12-13T08:31:13.157997802Z" level=info msg="Started container" PID=6471 containerID=c21d94dc65c6eb8da13eee08e418b427aa96780f97b8c5221f4a8a5ed04e8b21 description=local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188/helper-pod id=3c929de3-3c1f-460d-b791-934f9afca9be name=/runtime.v1.RuntimeService/StartContainer sandboxID=c7d26b4658c3e589bfe172203790abfc2bc17550acf5f970270e561834a96526
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	c21d94dc65c6e       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            Less than a second ago   Exited              helper-pod                               0                   c7d26b4658c3e       helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188   local-path-storage
	b2a20bd000d24       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago            Running             busybox                                  0                   421c33a2994e6       busybox                                                      default
	e6db23cda0e2e       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             13 seconds ago           Running             controller                               0                   2f3afb1dcee5f       ingress-nginx-controller-85d4c799dd-mh7x6                    ingress-nginx
	4bb25dd81c054       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          17 seconds ago           Running             csi-snapshotter                          0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	698d7bca1539f       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 seconds ago           Running             csi-provisioner                          0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	e4fc393aefc08       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            19 seconds ago           Running             liveness-probe                           0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	b4d11fbe011a5       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           20 seconds ago           Running             hostpath                                 0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	1adfcaf697f26       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            20 seconds ago           Running             gadget                                   0                   f6705a881897d       gadget-f6967                                                 gadget
	946292da75844       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             20 seconds ago           Exited              patch                                    2                   79c87d0ba134e       ingress-nginx-admission-patch-hwg9m                          ingress-nginx
	e74c5d67f2c68       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                22 seconds ago           Running             node-driver-registrar                    0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	550d4b374da52       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 23 seconds ago           Running             gcp-auth                                 0                   c1091482eca05       gcp-auth-78565c9fb4-trzh2                                    gcp-auth
	cdb0926d96122       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             23 seconds ago           Exited              patch                                    2                   c17f6ff7e2bef       gcp-auth-certs-patch-522qt                                   gcp-auth
	9af098f389792       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago           Running             registry-proxy                           0                   2c68489e2f00d       registry-proxy-cd6hw                                         kube-system
	64bbaac2488ef       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   27 seconds ago           Running             csi-external-health-monitor-controller   0                   607392f7905ed       csi-hostpathplugin-btrm5                                     kube-system
	8c199d50a7ce0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     28 seconds ago           Running             amd-gpu-device-plugin                    0                   cdf0b4efaaf9f       amd-gpu-device-plugin-vwtp8                                  kube-system
	d4e6af613c7f6       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             28 seconds ago           Running             local-path-provisioner                   0                   830d3b663457b       local-path-provisioner-648f6765c9-kl88n                      local-path-storage
	fde0db8ec43cf       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     29 seconds ago           Running             nvidia-device-plugin-ctr                 0                   7224a44197102       nvidia-device-plugin-daemonset-ss6tf                         kube-system
	e6ccab73be6a9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   32 seconds ago           Exited              create                                   0                   b7d0583cf0d44       ingress-nginx-admission-create-k5s7l                         ingress-nginx
	fe444a659a1b4       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        32 seconds ago           Running             metrics-server                           0                   440e12a30501e       metrics-server-85b7d694d7-zrm5x                              kube-system
	fa1c6645c83e0       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago           Running             csi-resizer                              0                   d60cc7958cb9f       csi-hostpath-resizer-0                                       kube-system
	73d0270b822b7       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              35 seconds ago           Running             yakd                                     0                   7c96883ed6893       yakd-dashboard-6654c87f9b-z5th7                              yakd-dashboard
	da278c988a5bf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago           Exited              create                                   0                   28d4d24b0f1b4       gcp-auth-certs-create-dbms6                                  gcp-auth
	ad874af797e83       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             37 seconds ago           Running             csi-attacher                             0                   70bec611eeeca       csi-hostpath-attacher-0                                      kube-system
	7ca9d8ef02322       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago           Running             volume-snapshot-controller               0                   58a3d0fa25fb2       snapshot-controller-7d9fbc56b8-wfsd6                         kube-system
	2932c3775c90e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago           Running             volume-snapshot-controller               0                   bc3b1ea1c391d       snapshot-controller-7d9fbc56b8-4d65q                         kube-system
	66396849dd6c6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           39 seconds ago           Running             registry                                 0                   a4c2fb473d934       registry-6b586f9694-xvfhz                                    kube-system
	f41a46f1596b5       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               41 seconds ago           Running             cloud-spanner-emulator                   0                   237ba51eb498a       cloud-spanner-emulator-5bdddb765-gtmzw                       default
	3c9c834fe19b9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               44 seconds ago           Running             minikube-ingress-dns                     0                   e6b3b097d0715       kube-ingress-dns-minikube                                    kube-system
	fe903b8244ca7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago           Running             coredns                                  0                   2c8fd1fb4e8bd       coredns-66bc5c9577-lp9sl                                     kube-system
	f0ce98858d71b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago           Running             storage-provisioner                      0                   a39dd6791e5cc       storage-provisioner                                          kube-system
	dc46446aa2f04       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                                             About a minute ago       Running             kube-proxy                               0                   e5ddd168c148a       kube-proxy-kr7zc                                             kube-system
	ee8f73e803fab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago       Running             kindnet-cni                              0                   9cf5b1100e741       kindnet-qpw8x                                                kube-system
	8c1744d0402c3       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                                             About a minute ago       Running             kube-apiserver                           0                   8de7a85804b11       kube-apiserver-addons-916029                                 kube-system
	1814222a5735c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                                             About a minute ago       Running             kube-controller-manager                  0                   b11095866ee3e       kube-controller-manager-addons-916029                        kube-system
	ff2d7aaca1ac9       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                                             About a minute ago       Running             kube-scheduler                           0                   6f59e9719337c       kube-scheduler-addons-916029                                 kube-system
	ea0dc0efd19f5       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago       Running             etcd                                     0                   e6108698a27b8       etcd-addons-916029                                           kube-system
	
	
	==> coredns [fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376] <==
	[INFO] 10.244.0.18:49774 - 48032 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000179849s
	[INFO] 10.244.0.18:44330 - 49195 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089669s
	[INFO] 10.244.0.18:44330 - 48910 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102388s
	[INFO] 10.244.0.18:41298 - 8917 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000045673s
	[INFO] 10.244.0.18:41298 - 8581 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000067356s
	[INFO] 10.244.0.18:34066 - 46321 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066469s
	[INFO] 10.244.0.18:34066 - 45838 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000099049s
	[INFO] 10.244.0.18:47881 - 22489 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000039806s
	[INFO] 10.244.0.18:47881 - 22031 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000055599s
	[INFO] 10.244.0.18:40699 - 3179 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000115452s
	[INFO] 10.244.0.18:40699 - 3006 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159021s
	[INFO] 10.244.0.20:48001 - 3733 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018328s
	[INFO] 10.244.0.20:38314 - 5264 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187225s
	[INFO] 10.244.0.20:60206 - 3000 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011823s
	[INFO] 10.244.0.20:44722 - 36165 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009309s
	[INFO] 10.244.0.20:35952 - 14694 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120408s
	[INFO] 10.244.0.20:46843 - 56153 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118744s
	[INFO] 10.244.0.20:38409 - 3935 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004710858s
	[INFO] 10.244.0.20:37022 - 25180 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006202584s
	[INFO] 10.244.0.20:57701 - 9202 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004765676s
	[INFO] 10.244.0.20:34781 - 20197 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00628365s
	[INFO] 10.244.0.20:51977 - 38486 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004408159s
	[INFO] 10.244.0.20:53836 - 47143 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005625804s
	[INFO] 10.244.0.20:41624 - 31056 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001032431s
	[INFO] 10.244.0.20:48401 - 48285 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002722073s
	
	
	==> describe nodes <==
	Name:               addons-916029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-916029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=addons-916029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T08_29_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-916029
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-916029"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 08:29:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-916029
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 08:31:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 08:31:08 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 08:31:08 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 08:31:08 +0000   Sat, 13 Dec 2025 08:29:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 08:31:08 +0000   Sat, 13 Dec 2025 08:30:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-916029
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                e183f2ea-5441-4130-a280-3a2146a78b75
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-gtmzw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gadget                      gadget-f6967                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gcp-auth                    gcp-auth-78565c9fb4-trzh2                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-mh7x6                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-vwtp8                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-lp9sl                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 csi-hostpathplugin-btrm5                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-916029                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-qpw8x                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-addons-916029                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-addons-916029                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-kr7zc                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-addons-916029                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 metrics-server-85b7d694d7-zrm5x                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         90s
	  kube-system                 nvidia-device-plugin-daemonset-ss6tf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-xvfhz                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-creds-764b6fb674-vj2wj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-cd6hw                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-4d65q                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 snapshot-controller-7d9fbc56b8-wfsd6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  local-path-storage          helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-kl88n                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-z5th7                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node addons-916029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node addons-916029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node addons-916029 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node addons-916029 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node addons-916029 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node addons-916029 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                  node-controller  Node addons-916029 event: Registered Node addons-916029 in Controller
	  Normal  NodeReady                49s                  kubelet          Node addons-916029 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec13 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001870] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.381619] i8042: Warning: Keylock active
	[  +0.012691] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.482712] block sda: the capability attribute has been deprecated.
	[  +0.083084] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023653] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.640510] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90] <==
	{"level":"warn","ts":"2025-12-13T08:29:33.536197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.542426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.559674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.566878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.573683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.580647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.587525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.593705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.600114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.606899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.613252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.619455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.626147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.632054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.647464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.653785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.659855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:33.704712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:44.780673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:29:44.788069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.079696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.086234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.113220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T08:30:11.125306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56792","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T08:30:52.076066Z","caller":"traceutil/trace.go:172","msg":"trace[289190147] transaction","detail":"{read_only:false; response_revision:1192; number_of_response:1; }","duration":"114.785168ms","start":"2025-12-13T08:30:51.961262Z","end":"2025-12-13T08:30:52.076047Z","steps":["trace[289190147] 'process raft request'  (duration: 62.978384ms)","trace[289190147] 'compare'  (duration: 51.706949ms)"],"step_count":2}
	
	
	==> gcp-auth [550d4b374da5260a2999b860be3bc240d05ff59fd92ea2272f4a911eaf79e79a] <==
	2025/12/13 08:30:49 GCP Auth Webhook started!
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:03 Ready to marshal response ...
	2025/12/13 08:31:03 Ready to write response ...
	2025/12/13 08:31:12 Ready to marshal response ...
	2025/12/13 08:31:12 Ready to write response ...
	2025/12/13 08:31:12 Ready to marshal response ...
	2025/12/13 08:31:12 Ready to write response ...
	
	
	==> kernel <==
	 08:31:13 up 13 min,  0 user,  load average: 1.83, 0.90, 0.35
	Linux addons-916029 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d] <==
	I1213 08:29:43.553988       1 main.go:148] setting mtu 1500 for CNI 
	I1213 08:29:43.554043       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 08:29:43.554076       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T08:29:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 08:29:43.848901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 08:29:43.848986       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 08:29:43.849023       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 08:29:43.849192       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 08:30:13.850397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 08:30:13.850397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 08:30:13.850389       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1213 08:30:13.850523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1213 08:30:14.949873       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 08:30:14.949900       1 metrics.go:72] Registering metrics
	I1213 08:30:14.949959       1 controller.go:711] "Syncing nftables rules"
	I1213 08:30:23.849529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:30:23.849570       1 main.go:301] handling current node
	I1213 08:30:33.849150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:30:33.849180       1 main.go:301] handling current node
	I1213 08:30:43.849227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:30:43.849270       1 main.go:301] handling current node
	I1213 08:30:53.849046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:30:53.849084       1 main.go:301] handling current node
	I1213 08:31:03.848990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 08:31:03.849039       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0] <==
	W1213 08:30:41.863457       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 08:30:41.863582       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1213 08:30:41.863763       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.149.29:443: connect: connection refused" logger="UnhandledError"
	E1213 08:30:41.869152       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.149.29:443: connect: connection refused" logger="UnhandledError"
	W1213 08:30:42.865414       1 handler_proxy.go:99] no RequestInfo found in the context
	W1213 08:30:42.865462       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 08:30:42.865518       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1213 08:30:42.865534       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 08:30:42.865535       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 08:30:42.866657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1213 08:30:46.895307       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.149.29:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1213 08:30:46.895374       1 handler_proxy.go:99] no RequestInfo found in the context
	E1213 08:30:46.895405       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1213 08:30:46.905306       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 08:31:11.636158       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38160: use of closed network connection
	E1213 08:31:11.781330       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38198: use of closed network connection
	
	
	==> kube-controller-manager [1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955] <==
	I1213 08:29:41.066971       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 08:29:41.067015       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 08:29:41.067042       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 08:29:41.067017       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 08:29:41.067080       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 08:29:41.067080       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 08:29:41.067135       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 08:29:41.067285       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 08:29:41.070834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 08:29:41.070840       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 08:29:41.070936       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 08:29:41.076184       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 08:29:41.081359       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1213 08:29:41.089841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1213 08:29:43.449635       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1213 08:30:11.074774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 08:30:11.074899       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1213 08:30:11.074935       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1213 08:30:11.097035       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1213 08:30:11.100231       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1213 08:30:11.175897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 08:30:11.201251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 08:30:26.023099       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1213 08:30:41.181382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1213 08:30:41.208157       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535] <==
	I1213 08:29:43.424758       1 server_linux.go:53] "Using iptables proxy"
	I1213 08:29:43.589388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 08:29:43.690372       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 08:29:43.690399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1213 08:29:43.690507       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 08:29:43.712616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 08:29:43.712679       1 server_linux.go:132] "Using iptables Proxier"
	I1213 08:29:43.719167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 08:29:43.723857       1 server.go:527] "Version info" version="v1.34.2"
	I1213 08:29:43.723882       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 08:29:43.726163       1 config.go:200] "Starting service config controller"
	I1213 08:29:43.726172       1 config.go:106] "Starting endpoint slice config controller"
	I1213 08:29:43.726181       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 08:29:43.726181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 08:29:43.726273       1 config.go:309] "Starting node config controller"
	I1213 08:29:43.726285       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 08:29:43.726436       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 08:29:43.726444       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 08:29:43.826332       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 08:29:43.826406       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 08:29:43.826441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 08:29:43.826771       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0] <==
	E1213 08:29:34.093910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 08:29:34.093957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:29:34.093767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 08:29:34.096992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 08:29:34.097204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 08:29:34.097572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 08:29:34.097587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 08:29:34.098182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:29:34.098820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 08:29:34.098954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 08:29:34.098993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:29:34.099010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 08:29:34.099171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 08:29:34.099410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 08:29:34.099423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 08:29:35.006627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 08:29:35.075613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 08:29:35.082619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 08:29:35.115034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 08:29:35.126894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 08:29:35.156924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 08:29:35.157665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 08:29:35.169779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 08:29:35.201998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1213 08:29:35.590718       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 08:30:51 addons-916029 kubelet[1282]: I1213 08:30:51.087346    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfr7t\" (UniqueName: \"kubernetes.io/projected/f0308b68-98fb-43e3-a7f1-6d0191b2a0f2-kube-api-access-bfr7t\") pod \"f0308b68-98fb-43e3-a7f1-6d0191b2a0f2\" (UID: \"f0308b68-98fb-43e3-a7f1-6d0191b2a0f2\") "
	Dec 13 08:30:51 addons-916029 kubelet[1282]: I1213 08:30:51.090317    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0308b68-98fb-43e3-a7f1-6d0191b2a0f2-kube-api-access-bfr7t" (OuterVolumeSpecName: "kube-api-access-bfr7t") pod "f0308b68-98fb-43e3-a7f1-6d0191b2a0f2" (UID: "f0308b68-98fb-43e3-a7f1-6d0191b2a0f2"). InnerVolumeSpecName "kube-api-access-bfr7t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 08:30:51 addons-916029 kubelet[1282]: I1213 08:30:51.188665    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfr7t\" (UniqueName: \"kubernetes.io/projected/f0308b68-98fb-43e3-a7f1-6d0191b2a0f2-kube-api-access-bfr7t\") on node \"addons-916029\" DevicePath \"\""
	Dec 13 08:30:51 addons-916029 kubelet[1282]: I1213 08:30:51.610069    1282 scope.go:117] "RemoveContainer" containerID="828ae5a3c2ed2e52ed8b16f3d4c864d4d3896226e9cf08ef4978cdfeaf500afe"
	Dec 13 08:30:51 addons-916029 kubelet[1282]: I1213 08:30:51.892671    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c17f6ff7e2befd2b6e51802e9917e1f8bb79dff2a0f3de8c68c1da4667cf45db"
	Dec 13 08:30:52 addons-916029 kubelet[1282]: I1213 08:30:52.902086    1282 scope.go:117] "RemoveContainer" containerID="828ae5a3c2ed2e52ed8b16f3d4c864d4d3896226e9cf08ef4978cdfeaf500afe"
	Dec 13 08:30:52 addons-916029 kubelet[1282]: I1213 08:30:52.915881    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-f6967" podStartSLOduration=65.22647715 podStartE2EDuration="1m9.915860483s" podCreationTimestamp="2025-12-13 08:29:43 +0000 UTC" firstStartedPulling="2025-12-13 08:30:47.816049387 +0000 UTC m=+71.281991898" lastFinishedPulling="2025-12-13 08:30:52.505432721 +0000 UTC m=+75.971375231" observedRunningTime="2025-12-13 08:30:52.91536617 +0000 UTC m=+76.381308690" watchObservedRunningTime="2025-12-13 08:30:52.915860483 +0000 UTC m=+76.381803005"
	Dec 13 08:30:53 addons-916029 kubelet[1282]: I1213 08:30:53.661959    1282 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 13 08:30:53 addons-916029 kubelet[1282]: I1213 08:30:53.662012    1282 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 13 08:30:54 addons-916029 kubelet[1282]: I1213 08:30:54.007695    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9vdf\" (UniqueName: \"kubernetes.io/projected/83c5037f-c23c-4e26-bf59-04bbaaf2adf7-kube-api-access-q9vdf\") pod \"83c5037f-c23c-4e26-bf59-04bbaaf2adf7\" (UID: \"83c5037f-c23c-4e26-bf59-04bbaaf2adf7\") "
	Dec 13 08:30:54 addons-916029 kubelet[1282]: I1213 08:30:54.012236    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c5037f-c23c-4e26-bf59-04bbaaf2adf7-kube-api-access-q9vdf" (OuterVolumeSpecName: "kube-api-access-q9vdf") pod "83c5037f-c23c-4e26-bf59-04bbaaf2adf7" (UID: "83c5037f-c23c-4e26-bf59-04bbaaf2adf7"). InnerVolumeSpecName "kube-api-access-q9vdf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 13 08:30:54 addons-916029 kubelet[1282]: I1213 08:30:54.108278    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9vdf\" (UniqueName: \"kubernetes.io/projected/83c5037f-c23c-4e26-bf59-04bbaaf2adf7-kube-api-access-q9vdf\") on node \"addons-916029\" DevicePath \"\""
	Dec 13 08:30:54 addons-916029 kubelet[1282]: I1213 08:30:54.917437    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c87d0ba134eb5afa852f499edad21db40836c3f88cbdf0c5c32f3b487411a8"
	Dec 13 08:30:55 addons-916029 kubelet[1282]: I1213 08:30:55.937352    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-btrm5" podStartSLOduration=0.962903367 podStartE2EDuration="31.937335091s" podCreationTimestamp="2025-12-13 08:30:24 +0000 UTC" firstStartedPulling="2025-12-13 08:30:24.739871739 +0000 UTC m=+48.205814239" lastFinishedPulling="2025-12-13 08:30:55.714303464 +0000 UTC m=+79.180245963" observedRunningTime="2025-12-13 08:30:55.936635925 +0000 UTC m=+79.402578478" watchObservedRunningTime="2025-12-13 08:30:55.937335091 +0000 UTC m=+79.403277612"
	Dec 13 08:30:56 addons-916029 kubelet[1282]: E1213 08:30:56.222827    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 13 08:30:56 addons-916029 kubelet[1282]: E1213 08:30:56.222896    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d277eaca-8fc8-4604-81bc-7e6c4ab2feeb-gcr-creds podName:d277eaca-8fc8-4604-81bc-7e6c4ab2feeb nodeName:}" failed. No retries permitted until 2025-12-13 08:31:28.222882728 +0000 UTC m=+111.688825238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d277eaca-8fc8-4604-81bc-7e6c4ab2feeb-gcr-creds") pod "registry-creds-764b6fb674-vj2wj" (UID: "d277eaca-8fc8-4604-81bc-7e6c4ab2feeb") : secret "registry-creds-gcr" not found
	Dec 13 08:31:00 addons-916029 kubelet[1282]: I1213 08:31:00.961170    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-mh7x6" podStartSLOduration=74.120594141 podStartE2EDuration="1m17.961149893s" podCreationTimestamp="2025-12-13 08:29:43 +0000 UTC" firstStartedPulling="2025-12-13 08:30:56.444389829 +0000 UTC m=+79.910332333" lastFinishedPulling="2025-12-13 08:31:00.28494557 +0000 UTC m=+83.750888085" observedRunningTime="2025-12-13 08:31:00.960243443 +0000 UTC m=+84.426185982" watchObservedRunningTime="2025-12-13 08:31:00.961149893 +0000 UTC m=+84.427092414"
	Dec 13 08:31:03 addons-916029 kubelet[1282]: I1213 08:31:03.583436    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsg7m\" (UniqueName: \"kubernetes.io/projected/19e93556-7441-4a02-80d8-8b2015579721-kube-api-access-qsg7m\") pod \"busybox\" (UID: \"19e93556-7441-4a02-80d8-8b2015579721\") " pod="default/busybox"
	Dec 13 08:31:03 addons-916029 kubelet[1282]: I1213 08:31:03.583568    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/19e93556-7441-4a02-80d8-8b2015579721-gcp-creds\") pod \"busybox\" (UID: \"19e93556-7441-4a02-80d8-8b2015579721\") " pod="default/busybox"
	Dec 13 08:31:05 addons-916029 kubelet[1282]: I1213 08:31:05.979518    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.295966344 podStartE2EDuration="2.979501908s" podCreationTimestamp="2025-12-13 08:31:03 +0000 UTC" firstStartedPulling="2025-12-13 08:31:03.820570419 +0000 UTC m=+87.286512936" lastFinishedPulling="2025-12-13 08:31:05.504105998 +0000 UTC m=+88.970048500" observedRunningTime="2025-12-13 08:31:05.978534287 +0000 UTC m=+89.444476823" watchObservedRunningTime="2025-12-13 08:31:05.979501908 +0000 UTC m=+89.445444422"
	Dec 13 08:31:10 addons-916029 kubelet[1282]: I1213 08:31:10.612732    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f159034-b519-4ac7-a8c4-dc7552fc787d" path="/var/lib/kubelet/pods/7f159034-b519-4ac7-a8c4-dc7552fc787d/volumes"
	Dec 13 08:31:12 addons-916029 kubelet[1282]: I1213 08:31:12.349969    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/dbf0a4f8-348d-473b-8543-2ebad9c8ad01-data\") pod \"helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188\" (UID: \"dbf0a4f8-348d-473b-8543-2ebad9c8ad01\") " pod="local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188"
	Dec 13 08:31:12 addons-916029 kubelet[1282]: I1213 08:31:12.350075    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dbf0a4f8-348d-473b-8543-2ebad9c8ad01-gcp-creds\") pod \"helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188\" (UID: \"dbf0a4f8-348d-473b-8543-2ebad9c8ad01\") " pod="local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188"
	Dec 13 08:31:12 addons-916029 kubelet[1282]: I1213 08:31:12.350255    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flx2q\" (UniqueName: \"kubernetes.io/projected/dbf0a4f8-348d-473b-8543-2ebad9c8ad01-kube-api-access-flx2q\") pod \"helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188\" (UID: \"dbf0a4f8-348d-473b-8543-2ebad9c8ad01\") " pod="local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188"
	Dec 13 08:31:12 addons-916029 kubelet[1282]: I1213 08:31:12.350287    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/dbf0a4f8-348d-473b-8543-2ebad9c8ad01-script\") pod \"helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188\" (UID: \"dbf0a4f8-348d-473b-8543-2ebad9c8ad01\") " pod="local-path-storage/helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188"
	
	
	==> storage-provisioner [f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c] <==
	W1213 08:30:48.941865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:50.946720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:50.954634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:52.957472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:52.960976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:54.963643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:54.966785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:56.970621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:56.978332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:58.981567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:30:58.986391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:00.988708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:00.991835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:03.003423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:03.010222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:05.033836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:05.122971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:07.125372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:07.130351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:09.132936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:09.138417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:11.141392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:11.145106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:13.147849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 08:31:13.154464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-916029 -n addons-916029
helpers_test.go:270: (dbg) Run:  kubectl --context addons-916029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path gcp-auth-certs-patch-522qt ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m registry-creds-764b6fb674-vj2wj helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-916029 describe pod test-local-path gcp-auth-certs-patch-522qt ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m registry-creds-764b6fb674-vj2wj helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-916029 describe pod test-local-path gcp-auth-certs-patch-522qt ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m registry-creds-764b6fb674-vj2wj helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188: exit status 1 (70.011678ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s2xsz (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-s2xsz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-522qt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-k5s7l" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hwg9m" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-vj2wj" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-916029 describe pod test-local-path gcp-auth-certs-patch-522qt ingress-nginx-admission-create-k5s7l ingress-nginx-admission-patch-hwg9m registry-creds-764b6fb674-vj2wj helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188: exit status 1
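Note: the non-zero exit above comes from kubectl describe pod itself, which fails as soon as any of the named pods is absent from the default namespace, even though it still prints the one it does find (test-local-path above); the remaining pods were either already cleaned up or live in other namespaces. A minimal, hypothetical variant of the same post-mortem that tolerates missing pods (this is a sketch, not what helpers_test.go actually runs) could look like:

	# hypothetical sketch: describe each pod only if it still exists in the default namespace,
	# so already-deleted pods do not turn the whole post-mortem into exit status 1
	for p in test-local-path gcp-auth-certs-patch-522qt ingress-nginx-admission-create-k5s7l \
	         ingress-nginx-admission-patch-hwg9m registry-creds-764b6fb674-vj2wj \
	         helper-pod-create-pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188; do
	  if kubectl --context addons-916029 get pod "$p" >/dev/null 2>&1; then
	    kubectl --context addons-916029 describe pod "$p"
	  fi
	done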
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.506072ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:14.397653   20602 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:14.397926   20602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:14.397942   20602 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:14.397946   20602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:14.398145   20602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:14.398370   20602 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:14.398885   20602 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:14.398906   20602 addons.go:622] checking whether the cluster is paused
	I1213 08:31:14.398993   20602 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:14.399005   20602 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:14.399373   20602 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:14.417454   20602 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:14.417513   20602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:14.435465   20602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:14.530071   20602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:14.530148   20602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:14.558751   20602 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:14.558774   20602 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:14.558781   20602 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:14.558786   20602 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:14.558791   20602 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:14.558796   20602 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:14.558800   20602 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:14.558804   20602 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:14.558809   20602 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:14.558840   20602 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:14.558849   20602 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:14.558854   20602 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:14.558858   20602 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:14.558862   20602 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:14.558865   20602 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:14.558872   20602 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:14.558877   20602 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:14.558882   20602 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:14.558885   20602 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:14.558887   20602 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:14.558890   20602 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:14.558898   20602 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:14.558903   20602 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:14.558907   20602 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:14.558911   20602 cri.go:89] found id: ""
	I1213 08:31:14.558959   20602 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:14.573056   20602 out.go:203] 
	W1213 08:31:14.574417   20602 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:14.574436   20602 out.go:285] * 
	* 
	W1213 08:31:14.577308   20602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:14.578579   20602 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.55s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-gtmzw" [9de80437-5e51-4ff3-848a-377f7f11905e] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003814721s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.945548ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:19.647732   20988 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:19.647888   20988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:19.647898   20988 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:19.647902   20988 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:19.648092   20988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:19.648602   20988 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:19.649800   20988 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:19.649825   20988 addons.go:622] checking whether the cluster is paused
	I1213 08:31:19.649921   20988 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:19.649954   20988 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:19.650312   20988 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:19.668165   20988 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:19.668215   20988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:19.688494   20988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:19.786396   20988 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:19.786460   20988 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:19.821296   20988 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:19.821321   20988 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:19.821325   20988 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:19.821328   20988 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:19.821331   20988 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:19.821345   20988 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:19.821348   20988 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:19.821351   20988 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:19.821356   20988 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:19.821365   20988 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:19.821373   20988 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:19.821377   20988 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:19.821382   20988 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:19.821386   20988 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:19.821391   20988 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:19.821398   20988 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:19.821406   20988 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:19.821412   20988 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:19.821417   20988 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:19.821421   20988 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:19.821429   20988 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:19.821434   20988 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:19.821437   20988 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:19.821440   20988 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:19.821443   20988 cri.go:89] found id: ""
	I1213 08:31:19.821506   20988 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:19.838374   20988 out.go:203] 
	W1213 08:31:19.839778   20988 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:19.839801   20988 out.go:285] * 
	* 
	W1213 08:31:19.842815   20988 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:19.844279   20988 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

                                                
                                    
TestAddons/parallel/LocalPath (8.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-916029 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-916029 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [a7e77ea6-54c6-40f2-8db0-e38f0caee57f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [a7e77ea6-54c6-40f2-8db0-e38f0caee57f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [a7e77ea6-54c6-40f2-8db0-e38f0caee57f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003284184s
addons_test.go:969: (dbg) Run:  kubectl --context addons-916029 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 ssh "cat /opt/local-path-provisioner/pvc-4e6967ca-6ed3-4756-815f-5fe7853bd188_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-916029 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-916029 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.815937ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:19.959768   21079 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:19.960024   21079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:19.960033   21079 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:19.960037   21079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:19.960232   21079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:19.960500   21079 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:19.960803   21079 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:19.960822   21079 addons.go:622] checking whether the cluster is paused
	I1213 08:31:19.960900   21079 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:19.960912   21079 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:19.961258   21079 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:19.979966   21079 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:19.980036   21079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:19.998502   21079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:20.095184   21079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:20.095254   21079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:20.124363   21079 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:20.124391   21079 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:20.124395   21079 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:20.124398   21079 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:20.124401   21079 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:20.124406   21079 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:20.124409   21079 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:20.124411   21079 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:20.124414   21079 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:20.124431   21079 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:20.124434   21079 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:20.124437   21079 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:20.124440   21079 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:20.124447   21079 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:20.124450   21079 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:20.124462   21079 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:20.124469   21079 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:20.124473   21079 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:20.124476   21079 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:20.124479   21079 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:20.124481   21079 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:20.124500   21079 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:20.124509   21079 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:20.124512   21079 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:20.124515   21079 cri.go:89] found id: ""
	I1213 08:31:20.124563   21079 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:20.140577   21079 out.go:203] 
	W1213 08:31:20.141958   21079 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:20.141999   21079 out.go:285] * 
	* 
	W1213 08:31:20.145339   21079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:20.146743   21079 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.12s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-ss6tf" [31114449-a40e-4a7c-a76a-da5a506f3892] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003190689s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (251.372695ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:17.092117   20743 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:17.092482   20743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:17.092510   20743 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:17.092515   20743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:17.092913   20743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:17.093239   20743 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:17.093620   20743 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:17.093644   20743 addons.go:622] checking whether the cluster is paused
	I1213 08:31:17.093751   20743 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:17.093774   20743 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:17.094320   20743 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:17.115653   20743 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:17.115716   20743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:17.136757   20743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:17.232083   20743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:17.232159   20743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:17.260724   20743 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:17.260742   20743 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:17.260746   20743 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:17.260749   20743 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:17.260752   20743 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:17.260755   20743 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:17.260757   20743 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:17.260760   20743 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:17.260763   20743 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:17.260771   20743 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:17.260774   20743 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:17.260777   20743 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:17.260780   20743 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:17.260783   20743 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:17.260786   20743 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:17.260796   20743 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:17.260803   20743 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:17.260808   20743 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:17.260810   20743 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:17.260813   20743 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:17.260816   20743 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:17.260819   20743 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:17.260821   20743 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:17.260824   20743 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:17.260827   20743 cri.go:89] found id: ""
	I1213 08:31:17.260864   20743 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:17.274976   20743 out.go:203] 
	W1213 08:31:17.276278   20743 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:17.276295   20743 out.go:285] * 
	* 
	W1213 08:31:17.279173   20743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:17.280662   20743 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                    
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-z5th7" [ad9e912c-b442-4dd3-a509-60bc7df355e0] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003653792s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable yakd --alsologtostderr -v=1: exit status 11 (244.880381ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:24.908051   21408 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:24.908202   21408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:24.908214   21408 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:24.908218   21408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:24.908456   21408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:24.908755   21408 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:24.909095   21408 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:24.909114   21408 addons.go:622] checking whether the cluster is paused
	I1213 08:31:24.909200   21408 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:24.909214   21408 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:24.909622   21408 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:24.927034   21408 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:24.927095   21408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:24.945798   21408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:25.040131   21408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:25.040225   21408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:25.069420   21408 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:25.069463   21408 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:25.069470   21408 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:25.069475   21408 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:25.069479   21408 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:25.069497   21408 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:25.069503   21408 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:25.069508   21408 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:25.069513   21408 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:25.069522   21408 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:25.069529   21408 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:25.069534   21408 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:25.069539   21408 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:25.069546   21408 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:25.069551   21408 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:25.069562   21408 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:25.069569   21408 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:25.069575   21408 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:25.069578   21408 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:25.069582   21408 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:25.069589   21408 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:25.069594   21408 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:25.069598   21408 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:25.069602   21408 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:25.069606   21408 cri.go:89] found id: ""
	I1213 08:31:25.069664   21408 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:25.084317   21408 out.go:203] 
	W1213 08:31:25.085749   21408 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:25.085779   21408 out.go:285] * 
	* 
	W1213 08:31:25.091256   21408 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:25.093206   21408 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-vwtp8" [9c59b49f-4ccd-41d7-a843-2e0044c03209] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003547065s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-916029 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-916029 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (242.865649ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:31:22.344841   21211 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:31:22.345104   21211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:22.345113   21211 out.go:374] Setting ErrFile to fd 2...
	I1213 08:31:22.345117   21211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:31:22.345314   21211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:31:22.345569   21211 mustload.go:66] Loading cluster: addons-916029
	I1213 08:31:22.345932   21211 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:22.345957   21211 addons.go:622] checking whether the cluster is paused
	I1213 08:31:22.346039   21211 config.go:182] Loaded profile config "addons-916029": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:31:22.346050   21211 host.go:66] Checking if "addons-916029" exists ...
	I1213 08:31:22.346404   21211 cli_runner.go:164] Run: docker container inspect addons-916029 --format={{.State.Status}}
	I1213 08:31:22.363981   21211 ssh_runner.go:195] Run: systemctl --version
	I1213 08:31:22.364172   21211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916029
	I1213 08:31:22.382078   21211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/addons-916029/id_rsa Username:docker}
	I1213 08:31:22.477537   21211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 08:31:22.477634   21211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 08:31:22.508661   21211 cri.go:89] found id: "4bb25dd81c0541c58b998a5dd1e1d2fe7a443642ac20fb592394af3d836737d4"
	I1213 08:31:22.508687   21211 cri.go:89] found id: "698d7bca1539fb40aff53c0c6284f7f24e87951c737357b6c6498f4a0c2ccad9"
	I1213 08:31:22.508693   21211 cri.go:89] found id: "e4fc393aefc08283b1783fbbea7888ea26a275bfe57c0e05cd0d336b6ed4966e"
	I1213 08:31:22.508699   21211 cri.go:89] found id: "b4d11fbe011a5a6679dbdec4c460349ab7c99e5017e8334bb7893843af12fd12"
	I1213 08:31:22.508703   21211 cri.go:89] found id: "e74c5d67f2c68576df4d7df9e93a1012a053c238c27d92ad2331b9fa54a041cc"
	I1213 08:31:22.508725   21211 cri.go:89] found id: "9af098f389792a2ca60bfe746b36b7037affa3720d9bb625614e600d0b53f777"
	I1213 08:31:22.508734   21211 cri.go:89] found id: "64bbaac2488ef0e0b7e28fac6e2f2a932fb36905093e3287b2b5d0e89e837414"
	I1213 08:31:22.508737   21211 cri.go:89] found id: "8c199d50a7ce02f37c9f7ca0c310d99d98be3926d103644855fa912e30d253c8"
	I1213 08:31:22.508739   21211 cri.go:89] found id: "fde0db8ec43cfda390edee03eb437eb6a2b1d282c69405847fc4c41892df3581"
	I1213 08:31:22.508745   21211 cri.go:89] found id: "fe444a659a1b400f330d2f2a1de1e6b38fc3f00a0515e54e7d432c4581d6f457"
	I1213 08:31:22.508751   21211 cri.go:89] found id: "fa1c6645c83e0942a6fa2b033162596e626df4025e61894457e512273a89358b"
	I1213 08:31:22.508755   21211 cri.go:89] found id: "ad874af797e833520e1981601d5efeefcf313d735f57b7308ad40bb2d8ccb664"
	I1213 08:31:22.508763   21211 cri.go:89] found id: "7ca9d8ef0232224f51f796a5a42b289083e38e83786d57e118ca410ae35b9a12"
	I1213 08:31:22.508769   21211 cri.go:89] found id: "2932c3775c90ead574169f47ba6b14c61f79b3b703cc7d808599349d052c6874"
	I1213 08:31:22.508777   21211 cri.go:89] found id: "66396849dd6c6c91e526433d374f5d0d1144295399e1fad4c6d195ee2f8c97dd"
	I1213 08:31:22.508785   21211 cri.go:89] found id: "3c9c834fe19b9b803891c5f93368f927b1f2634d843c5c238f3ff7eafcc9af21"
	I1213 08:31:22.508793   21211 cri.go:89] found id: "fe903b8244ca75ad59c2ba6d6627a47dbddbdb00c0b79521bed720495582e376"
	I1213 08:31:22.508799   21211 cri.go:89] found id: "f0ce98858d71b16674df75112d0cc8dea3187db9e5b6b9a26e34b985658d800c"
	I1213 08:31:22.508803   21211 cri.go:89] found id: "dc46446aa2f04cafd3c831a950b248003c413a344be8ef02be99bba788a4e535"
	I1213 08:31:22.508808   21211 cri.go:89] found id: "ee8f73e803fabda21274780e7ea0c4ee6bc062dd437d42b7e1b12ffdc7ab090d"
	I1213 08:31:22.508812   21211 cri.go:89] found id: "8c1744d0402c3b4c435779f8ad16604c3fe35760f5ed66187e6596daded501e0"
	I1213 08:31:22.508820   21211 cri.go:89] found id: "1814222a5735c1fdc7750cb1294512b66445dcd3907bf3c929744584c51b0955"
	I1213 08:31:22.508825   21211 cri.go:89] found id: "ff2d7aaca1ac9a55887bf0c667c1babda7c3e061cbe10f172a12e7c6eb050cd0"
	I1213 08:31:22.508831   21211 cri.go:89] found id: "ea0dc0efd19f50cc77a75c095d9c2a3b6079baa6c3be6acf2e1849d40c691b90"
	I1213 08:31:22.508853   21211 cri.go:89] found id: ""
	I1213 08:31:22.508903   21211 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 08:31:22.523063   21211 out.go:203] 
	W1213 08:31:22.524388   21211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:31:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 08:31:22.524409   21211 out.go:285] * 
	* 
	W1213 08:31:22.527369   21211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 08:31:22.528722   21211 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-916029 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (4.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr: (2.597242105s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image ls: (2.27373012s)
functional_test.go:461: expected "kicbase/echo-server:functional-331564" to be loaded into minikube but the image is not there
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (4.87s)

                                                
                                    
TestJSONOutput/pause/Command (2.18s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-095899 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-095899 --output=json --user=testUser: exit status 80 (2.183483886s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"384ef862-dabb-45f4-ad7d-a56b7d471618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-095899 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0b87ec43-19cc-4dc7-959a-1fc52fe49816","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T08:49:52Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"c75f2531-6021-4d4a-82f4-6f5ae15d375f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-095899 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.18s)

                                                
                                    
TestJSONOutput/unpause/Command (2.18s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-095899 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-095899 --output=json --user=testUser: exit status 80 (2.178254831s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a84a5180-4328-4060-8b16-b0c95daf384d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-095899 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2507fe9d-c6ce-4274-8736-6f0e897e5772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-13T08:49:55Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ce422810-27d5-4ade-9075-ab56562dcdad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-095899 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.18s)
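The unpause path fails on the identical runc listing. To see which OCI runtime the node's cri-o is actually configured to drive (and therefore where its container state lives), the same commands the post-mortem tooling already runs elsewhere in this report can be reused by hand; this is a sketch, and the grep pattern is only illustrative:

	# dump the effective cri-o configuration and look at its runtime section
	minikube -p json-output-095899 ssh -- sudo crio config | grep -n -A3 "crio.runtime"
	# ask the CRI endpoint for its own view of the runtime configuration
	minikube -p json-output-095899 ssh -- sudo crictl info

Comparing that output against the `sudo runc list -f json` call shown above is one way to tell whether the listing is simply pointed at a runtime root that this cri-o setup never creates.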

                                                
                                    
x
+
TestPause/serial/Pause (5.37s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-154627 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-154627 --alsologtostderr -v=5: exit status 80 (1.771227594s)

                                                
                                                
-- stdout --
	* Pausing node pause-154627 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:06:01.162995  252445 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:06:01.163113  252445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:01.163125  252445 out.go:374] Setting ErrFile to fd 2...
	I1213 09:06:01.163129  252445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:01.163312  252445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:06:01.163584  252445 out.go:368] Setting JSON to false
	I1213 09:06:01.163602  252445 mustload.go:66] Loading cluster: pause-154627
	I1213 09:06:01.164035  252445 config.go:182] Loaded profile config "pause-154627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:06:01.164464  252445 cli_runner.go:164] Run: docker container inspect pause-154627 --format={{.State.Status}}
	I1213 09:06:01.185307  252445 host.go:66] Checking if "pause-154627" exists ...
	I1213 09:06:01.185578  252445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:06:01.241904  252445 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-13 09:06:01.231388846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:06:01.242781  252445 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-154627 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:06:01.244905  252445 out.go:179] * Pausing node pause-154627 ... 
	I1213 09:06:01.246242  252445 host.go:66] Checking if "pause-154627" exists ...
	I1213 09:06:01.246649  252445 ssh_runner.go:195] Run: systemctl --version
	I1213 09:06:01.246710  252445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:06:01.266517  252445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:06:01.366023  252445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:06:01.380817  252445 pause.go:52] kubelet running: true
	I1213 09:06:01.380925  252445 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:06:01.518277  252445 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:06:01.518375  252445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:06:01.591315  252445 cri.go:89] found id: "ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc"
	I1213 09:06:01.591336  252445 cri.go:89] found id: "6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a"
	I1213 09:06:01.591340  252445 cri.go:89] found id: "5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493"
	I1213 09:06:01.591343  252445 cri.go:89] found id: "bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817"
	I1213 09:06:01.591346  252445 cri.go:89] found id: "e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e"
	I1213 09:06:01.591349  252445 cri.go:89] found id: "3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b"
	I1213 09:06:01.591352  252445 cri.go:89] found id: "a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec"
	I1213 09:06:01.591354  252445 cri.go:89] found id: ""
	I1213 09:06:01.591392  252445 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:06:01.607036  252445 retry.go:31] will retry after 222.386048ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:06:01Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:06:01.830630  252445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:06:01.845087  252445 pause.go:52] kubelet running: false
	I1213 09:06:01.845172  252445 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:06:01.972586  252445 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:06:01.972759  252445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:06:02.051478  252445 cri.go:89] found id: "ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc"
	I1213 09:06:02.051518  252445 cri.go:89] found id: "6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a"
	I1213 09:06:02.051525  252445 cri.go:89] found id: "5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493"
	I1213 09:06:02.051533  252445 cri.go:89] found id: "bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817"
	I1213 09:06:02.051538  252445 cri.go:89] found id: "e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e"
	I1213 09:06:02.051543  252445 cri.go:89] found id: "3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b"
	I1213 09:06:02.051547  252445 cri.go:89] found id: "a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec"
	I1213 09:06:02.051552  252445 cri.go:89] found id: ""
	I1213 09:06:02.051591  252445 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:06:02.065134  252445 retry.go:31] will retry after 510.361779ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:06:02Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:06:02.575723  252445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:06:02.590670  252445 pause.go:52] kubelet running: false
	I1213 09:06:02.590730  252445 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:06:02.754169  252445 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:06:02.754244  252445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:06:02.846643  252445 cri.go:89] found id: "ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc"
	I1213 09:06:02.846681  252445 cri.go:89] found id: "6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a"
	I1213 09:06:02.846687  252445 cri.go:89] found id: "5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493"
	I1213 09:06:02.846693  252445 cri.go:89] found id: "bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817"
	I1213 09:06:02.846698  252445 cri.go:89] found id: "e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e"
	I1213 09:06:02.846702  252445 cri.go:89] found id: "3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b"
	I1213 09:06:02.846709  252445 cri.go:89] found id: "a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec"
	I1213 09:06:02.846713  252445 cri.go:89] found id: ""
	I1213 09:06:02.846760  252445 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:06:02.861288  252445 out.go:203] 
	W1213 09:06:02.862645  252445 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:06:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:06:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:06:02.862668  252445 out.go:285] * 
	* 
	W1213 09:06:02.866665  252445 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:06:02.868131  252445 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-154627 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-154627
helpers_test.go:244: (dbg) docker inspect pause-154627:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf",
	        "Created": "2025-12-13T09:04:48.46871142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:04:48.509736551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf-json.log",
	        "Name": "/pause-154627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-154627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-154627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf",
	                "LowerDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-154627",
	                "Source": "/var/lib/docker/volumes/pause-154627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-154627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-154627",
	                "name.minikube.sigs.k8s.io": "pause-154627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e25e74792d47b776af0a26ad4ee8d05375f74eeb7fa4fe219e7291838873f3f",
	            "SandboxKey": "/var/run/docker/netns/1e25e74792d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-154627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90d530bbac8b03396f3ea78a03ea5f23d30f164f9befb3f25533055510f14e64",
	                    "EndpointID": "ec3c5f2476989594142d389c6e00991445f2a33c8506429278de7ff2022b74f5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:c7:75:77:56:b9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-154627",
	                        "21d07c33d759"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-154627 -n pause-154627
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-154627 -n pause-154627: exit status 2 (325.265774ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-154627 logs -n 25
E1213 09:06:03.547548    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬──────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │   PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼──────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-833990 sudo systemctl status kubelet --all --full --no-pager         │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl cat kubelet --no-pager                         │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /var/lib/kubelet/config.yaml                         │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status docker --all --full --no-pager          │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat docker --no-pager                          │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/docker/daemon.json                              │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo docker system info                                       │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl status cri-docker --all --full --no-pager      │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat cri-docker --no-pager                      │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cri-dockerd --version                                    │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status containerd --all --full --no-pager      │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat containerd --no-pager                      │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo cat /lib/systemd/system/containerd.service               │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/containerd/config.toml                          │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo containerd config dump                                   │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status crio --all --full --no-pager            │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ pause   │ -p pause-154627 --alsologtostderr -v=5                                       │ pause-154627 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat crio --no-pager                            │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo crio config                                              │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ delete  │ -p auto-833990                                                               │ auto-833990  │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴──────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:05:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:05:54.981853  248940 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:05:54.982096  248940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:05:54.982104  248940 out.go:374] Setting ErrFile to fd 2...
	I1213 09:05:54.982109  248940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:05:54.982333  248940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:05:54.982761  248940 out.go:368] Setting JSON to false
	I1213 09:05:54.984083  248940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2907,"bootTime":1765613848,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:05:54.984136  248940 start.go:143] virtualization: kvm guest
	I1213 09:05:54.986267  248940 out.go:179] * [pause-154627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:05:54.987859  248940 notify.go:221] Checking for updates...
	I1213 09:05:54.987881  248940 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:05:54.989261  248940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:05:54.990747  248940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:05:54.992131  248940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:05:54.993736  248940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:05:54.995056  248940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:05:54.996832  248940 config.go:182] Loaded profile config "pause-154627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:05:54.997641  248940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:05:55.024266  248940 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:05:55.024400  248940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:05:55.091936  248940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-13 09:05:55.080633578 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:05:55.092137  248940 docker.go:319] overlay module found
	I1213 09:05:55.093958  248940 out.go:179] * Using the docker driver based on existing profile
	I1213 09:05:55.095239  248940 start.go:309] selected driver: docker
	I1213 09:05:55.095254  248940 start.go:927] validating driver "docker" against &{Name:pause-154627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-154627 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:05:55.095407  248940 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:05:55.095521  248940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:05:55.160500  248940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-13 09:05:55.149196388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:05:55.161203  248940 cni.go:84] Creating CNI manager for ""
	I1213 09:05:55.161269  248940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:05:55.161310  248940 start.go:353] cluster config:
	{Name:pause-154627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-154627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:05:55.163293  248940 out.go:179] * Starting "pause-154627" primary control-plane node in "pause-154627" cluster
	I1213 09:05:55.164532  248940 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:05:55.165680  248940 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:05:55.168101  248940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:05:55.168131  248940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:05:55.168148  248940 cache.go:65] Caching tarball of preloaded images
	I1213 09:05:55.168154  248940 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:05:55.168221  248940 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:05:55.168232  248940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:05:55.168344  248940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/config.json ...
	I1213 09:05:55.189706  248940 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:05:55.189724  248940 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:05:55.189739  248940 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:05:55.189766  248940 start.go:360] acquireMachinesLock for pause-154627: {Name:mkd63111677b07585d22a79b1ec05d6e06d1aeeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:05:55.189846  248940 start.go:364] duration metric: took 61.576µs to acquireMachinesLock for "pause-154627"
	I1213 09:05:55.189867  248940 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:05:55.189872  248940 fix.go:54] fixHost starting: 
	I1213 09:05:55.190156  248940 cli_runner.go:164] Run: docker container inspect pause-154627 --format={{.State.Status}}
	I1213 09:05:55.208610  248940 fix.go:112] recreateIfNeeded on pause-154627: state=Running err=<nil>
	W1213 09:05:55.208636  248940 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:05:53.388823  234686 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.175102258s)
	W1213 09:05:53.388884  234686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52324->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:52324->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1213 09:05:53.388907  234686 logs.go:123] Gathering logs for kube-apiserver [226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e] ...
	I1213 09:05:53.388929  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e"
	I1213 09:05:55.925778  234686 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 09:05:55.926174  234686 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1213 09:05:55.926226  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:05:55.926278  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:05:55.958250  234686 cri.go:89] found id: "226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e"
	I1213 09:05:55.958272  234686 cri.go:89] found id: ""
	I1213 09:05:55.958282  234686 logs.go:282] 1 containers: [226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e]
	I1213 09:05:55.958345  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:55.962645  234686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 09:05:55.962716  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:05:55.996691  234686 cri.go:89] found id: ""
	I1213 09:05:55.996723  234686 logs.go:282] 0 containers: []
	W1213 09:05:55.996735  234686 logs.go:284] No container was found matching "etcd"
	I1213 09:05:55.996744  234686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 09:05:55.996804  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:05:56.030445  234686 cri.go:89] found id: ""
	I1213 09:05:56.030471  234686 logs.go:282] 0 containers: []
	W1213 09:05:56.030493  234686 logs.go:284] No container was found matching "coredns"
	I1213 09:05:56.030503  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:05:56.030562  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:05:56.061284  234686 cri.go:89] found id: "9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973"
	I1213 09:05:56.061305  234686 cri.go:89] found id: ""
	I1213 09:05:56.061315  234686 logs.go:282] 1 containers: [9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973]
	I1213 09:05:56.061368  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:56.065566  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:05:56.065627  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:05:56.096678  234686 cri.go:89] found id: ""
	I1213 09:05:56.096699  234686 logs.go:282] 0 containers: []
	W1213 09:05:56.096707  234686 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:05:56.096712  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:05:56.096754  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:05:56.126737  234686 cri.go:89] found id: "e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb"
	I1213 09:05:56.126762  234686 cri.go:89] found id: "f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6"
	I1213 09:05:56.126768  234686 cri.go:89] found id: ""
	I1213 09:05:56.126777  234686 logs.go:282] 2 containers: [e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6]
	I1213 09:05:56.126834  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:56.130780  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:56.134387  234686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 09:05:56.134443  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:05:56.163982  234686 cri.go:89] found id: ""
	I1213 09:05:56.164006  234686 logs.go:282] 0 containers: []
	W1213 09:05:56.164015  234686 logs.go:284] No container was found matching "kindnet"
	I1213 09:05:56.164021  234686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:05:56.164065  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:05:56.194697  234686 cri.go:89] found id: ""
	I1213 09:05:56.194724  234686 logs.go:282] 0 containers: []
	W1213 09:05:56.194735  234686 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:05:56.194754  234686 logs.go:123] Gathering logs for kubelet ...
	I1213 09:05:56.194770  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:05:55.211133  248940 out.go:252] * Updating the running docker "pause-154627" container ...
	I1213 09:05:55.211162  248940 machine.go:94] provisionDockerMachine start ...
	I1213 09:05:55.211220  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:55.229683  248940 main.go:143] libmachine: Using SSH client type: native
	I1213 09:05:55.229934  248940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1213 09:05:55.229947  248940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:05:55.367547  248940 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-154627
	
	I1213 09:05:55.367574  248940 ubuntu.go:182] provisioning hostname "pause-154627"
	I1213 09:05:55.367634  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:55.391815  248940 main.go:143] libmachine: Using SSH client type: native
	I1213 09:05:55.392186  248940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1213 09:05:55.392217  248940 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-154627 && echo "pause-154627" | sudo tee /etc/hostname
	I1213 09:05:55.541743  248940 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-154627
	
	I1213 09:05:55.541820  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:55.561679  248940 main.go:143] libmachine: Using SSH client type: native
	I1213 09:05:55.561899  248940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1213 09:05:55.561914  248940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-154627' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-154627/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-154627' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:05:55.702673  248940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
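
Each provisioning step above is a single SSH command run against the container's published port 22 (127.0.0.1:33053 for this profile). A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the address, user and key path are placeholders, and minikube's own ssh_runner/sshutil code adds retries and scp support on top of this.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials an SSH endpoint with a private key and runs one command,
// returning combined stdout/stderr. All values are placeholders.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33053", "docker", "/path/to/id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
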
	I1213 09:05:55.702704  248940 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:05:55.702725  248940 ubuntu.go:190] setting up certificates
	I1213 09:05:55.702739  248940 provision.go:84] configureAuth start
	I1213 09:05:55.702797  248940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-154627
	I1213 09:05:55.722503  248940 provision.go:143] copyHostCerts
	I1213 09:05:55.722574  248940 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:05:55.722595  248940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:05:55.722683  248940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:05:55.722802  248940 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:05:55.722815  248940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:05:55.722858  248940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:05:55.722938  248940 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:05:55.722949  248940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:05:55.722995  248940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:05:55.723072  248940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.pause-154627 san=[127.0.0.1 192.168.94.2 localhost minikube pause-154627]
	I1213 09:05:55.802730  248940 provision.go:177] copyRemoteCerts
	I1213 09:05:55.802792  248940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:05:55.802836  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:55.821417  248940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:05:55.921653  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:05:55.942205  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 09:05:55.963932  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:05:55.983750  248940 provision.go:87] duration metric: took 280.986928ms to configureAuth
	I1213 09:05:55.983786  248940 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:05:55.984006  248940 config.go:182] Loaded profile config "pause-154627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:05:55.984125  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:56.008558  248940 main.go:143] libmachine: Using SSH client type: native
	I1213 09:05:56.008809  248940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1213 09:05:56.008834  248940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:05:56.434299  248940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:05:56.434320  248940 machine.go:97] duration metric: took 1.223152038s to provisionDockerMachine
	I1213 09:05:56.434332  248940 start.go:293] postStartSetup for "pause-154627" (driver="docker")
	I1213 09:05:56.434340  248940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:05:56.434404  248940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:05:56.434443  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:56.455372  248940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:05:56.557595  248940 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:05:56.561929  248940 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:05:56.561971  248940 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:05:56.561989  248940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:05:56.562052  248940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:05:56.562148  248940 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:05:56.562271  248940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:05:56.570824  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:05:56.590032  248940 start.go:296] duration metric: took 155.686572ms for postStartSetup
	I1213 09:05:56.590120  248940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:05:56.590162  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:56.611445  248940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:05:56.710044  248940 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:05:56.715718  248940 fix.go:56] duration metric: took 1.525840844s for fixHost
	I1213 09:05:56.715795  248940 start.go:83] releasing machines lock for "pause-154627", held for 1.525935336s
	I1213 09:05:56.715871  248940 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-154627
	I1213 09:05:56.738058  248940 ssh_runner.go:195] Run: cat /version.json
	I1213 09:05:56.738107  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:56.738174  248940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:05:56.738242  248940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-154627
	I1213 09:05:56.758621  248940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:05:56.759060  248940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/pause-154627/id_rsa Username:docker}
	I1213 09:05:56.908876  248940 ssh_runner.go:195] Run: systemctl --version
	I1213 09:05:56.915725  248940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:05:56.951116  248940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:05:56.955887  248940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:05:56.955963  248940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:05:56.966099  248940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:05:56.966121  248940 start.go:496] detecting cgroup driver to use...
	I1213 09:05:56.966152  248940 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:05:56.966214  248940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:05:56.982249  248940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:05:56.996711  248940 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:05:56.996758  248940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:05:57.012679  248940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:05:57.028135  248940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:05:57.157701  248940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:05:57.274771  248940 docker.go:234] disabling docker service ...
	I1213 09:05:57.274831  248940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:05:57.290976  248940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:05:57.303904  248940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:05:57.433071  248940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:05:57.546284  248940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:05:57.560020  248940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:05:57.576095  248940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:05:57.576147  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.586387  248940 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:05:57.586446  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.596796  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.607199  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.617610  248940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:05:57.626868  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.636822  248940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.646321  248940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:05:57.656750  248940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:05:57.665285  248940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:05:57.674778  248940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:05:57.794836  248940 ssh_runner.go:195] Run: sudo systemctl restart crio
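
The sequence above rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd and restarts crio. A rough in-process equivalent of that rewrite is sketched below; the path and values are taken from the log, the helper itself is illustrative, and minikube shells out to sed exactly as shown.

package main

import (
	"log"
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
// mirroring the sed commands in the log above. Illustrative only.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		log.Fatal(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "systemd"); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` would follow, as in the log.
}
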
	I1213 09:05:57.995670  248940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:05:57.995748  248940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:05:58.000192  248940 start.go:564] Will wait 60s for crictl version
	I1213 09:05:58.000264  248940 ssh_runner.go:195] Run: which crictl
	I1213 09:05:58.004386  248940 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:05:58.040156  248940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:05:58.040265  248940 ssh_runner.go:195] Run: crio --version
	I1213 09:05:58.074244  248940 ssh_runner.go:195] Run: crio --version
	I1213 09:05:58.107629  248940 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 09:05:58.109034  248940 cli_runner.go:164] Run: docker network inspect pause-154627 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:05:58.127758  248940 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 09:05:58.131870  248940 kubeadm.go:884] updating cluster {Name:pause-154627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-154627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:05:58.132051  248940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:05:58.132106  248940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:05:58.162999  248940 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:05:58.163039  248940 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:05:58.163087  248940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:05:58.188949  248940 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:05:58.188973  248940 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:05:58.188982  248940 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.2 crio true true} ...
	I1213 09:05:58.189119  248940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-154627 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-154627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:05:58.189212  248940 ssh_runner.go:195] Run: crio config
	I1213 09:05:58.238079  248940 cni.go:84] Creating CNI manager for ""
	I1213 09:05:58.238100  248940 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:05:58.238115  248940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:05:58.238134  248940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-154627 NodeName:pause-154627 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:05:58.238254  248940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-154627"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:05:58.238321  248940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:05:58.247348  248940 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:05:58.247409  248940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:05:58.256126  248940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1213 09:05:58.271580  248940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:05:58.285281  248940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
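
The kubeadm, kubelet and kube-proxy configuration above is rendered from the options struct logged a few lines earlier (AdvertiseAddress, PodSubnet, ServiceCIDR and so on) and copied to /var/tmp/minikube/kubeadm.yaml.new. A toy rendering of one fragment with text/template is sketched here; minikube's real template covers many more fields and lives in its kubeadm package.

package main

import (
	"log"
	"os"
	"text/template"
)

// Options holds a small subset of the kubeadm options seen in the log.
type Options struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	DNSDomain        string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	opts := Options{
		AdvertiseAddress: "192.168.94.2",
		APIServerPort:    8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		DNSDomain:        "cluster.local",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}
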
	I1213 09:05:58.299299  248940 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:05:58.303291  248940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:05:58.423355  248940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:05:58.437076  248940 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627 for IP: 192.168.94.2
	I1213 09:05:58.437096  248940 certs.go:195] generating shared ca certs ...
	I1213 09:05:58.437113  248940 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:05:58.437249  248940 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:05:58.437298  248940 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:05:58.437312  248940 certs.go:257] generating profile certs ...
	I1213 09:05:58.437390  248940 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/client.key
	I1213 09:05:58.437435  248940 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/apiserver.key.76f6c2c4
	I1213 09:05:58.437503  248940 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/proxy-client.key
	I1213 09:05:58.437629  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:05:58.437675  248940 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:05:58.437686  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:05:58.437714  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:05:58.437755  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:05:58.437790  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:05:58.437843  248940 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:05:58.438575  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:05:58.457363  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:05:58.475529  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:05:58.493222  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:05:58.511754  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 09:05:58.530381  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:05:58.549855  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:05:58.568838  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:05:58.587993  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:05:58.607018  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:05:58.630236  248940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:05:58.651461  248940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:05:58.665243  248940 ssh_runner.go:195] Run: openssl version
	I1213 09:05:58.672608  248940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:05:58.680316  248940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:05:58.688991  248940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:05:58.694626  248940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:05:58.694683  248940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:05:58.737333  248940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:05:58.746418  248940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:05:58.754045  248940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:05:58.761622  248940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:05:58.765559  248940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:05:58.765614  248940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:05:58.800438  248940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:05:58.808930  248940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:05:58.816449  248940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:05:58.824235  248940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:05:58.828601  248940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:05:58.828669  248940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:05:58.866666  248940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:05:58.874953  248940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:05:58.879315  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:05:58.932239  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:05:58.973027  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:05:59.009966  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:05:59.045606  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:05:59.082889  248940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
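
Each "openssl x509 ... -checkend 86400" call above asks whether a certificate expires within the next 24 hours; a non-zero exit would force the cert to be regenerated. The same question can be asked of the PEM file directly, as in this sketch (the path is one of those checked above, the helper itself is illustrative).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, the same check `openssl x509 -checkend` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
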
	I1213 09:05:59.129643  248940 kubeadm.go:401] StartCluster: {Name:pause-154627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-154627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:05:59.129773  248940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:05:59.129828  248940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:05:59.161193  248940 cri.go:89] found id: "ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc"
	I1213 09:05:59.161215  248940 cri.go:89] found id: "6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a"
	I1213 09:05:59.161283  248940 cri.go:89] found id: "5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493"
	I1213 09:05:59.161288  248940 cri.go:89] found id: "bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817"
	I1213 09:05:59.161293  248940 cri.go:89] found id: "e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e"
	I1213 09:05:59.161298  248940 cri.go:89] found id: "3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b"
	I1213 09:05:59.161303  248940 cri.go:89] found id: "a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec"
	I1213 09:05:59.161307  248940 cri.go:89] found id: ""
	I1213 09:05:59.161351  248940 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:05:59.174872  248940 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:05:59Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:05:59.174951  248940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:05:59.184178  248940 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:05:59.184200  248940 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:05:59.184253  248940 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:05:59.194111  248940 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:05:59.195318  248940 kubeconfig.go:125] found "pause-154627" server: "https://192.168.94.2:8443"
	I1213 09:05:59.196762  248940 kapi.go:59] client config for pause-154627: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/client.key", CAFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:05:59.197259  248940 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1213 09:05:59.197279  248940 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1213 09:05:59.197287  248940 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1213 09:05:59.197293  248940 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1213 09:05:59.197299  248940 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1213 09:05:59.197706  248940 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:05:59.206984  248940 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 09:05:59.207017  248940 kubeadm.go:602] duration metric: took 22.810846ms to restartPrimaryControlPlane
	I1213 09:05:59.207028  248940 kubeadm.go:403] duration metric: took 77.394844ms to StartCluster
	I1213 09:05:59.207046  248940 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:05:59.207114  248940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:05:59.208318  248940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:05:59.208618  248940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:05:59.208697  248940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:05:59.208895  248940 config.go:182] Loaded profile config "pause-154627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:05:59.210979  248940 out.go:179] * Enabled addons: 
	I1213 09:05:59.210990  248940 out.go:179] * Verifying Kubernetes components...
	I1213 09:05:59.212087  248940 addons.go:530] duration metric: took 3.398535ms for enable addons: enabled=[]
	I1213 09:05:59.212103  248940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:05:59.344244  248940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:05:59.359023  248940 node_ready.go:35] waiting up to 6m0s for node "pause-154627" to be "Ready" ...
	I1213 09:05:59.368007  248940 node_ready.go:49] node "pause-154627" is "Ready"
	I1213 09:05:59.368033  248940 node_ready.go:38] duration metric: took 8.975964ms for node "pause-154627" to be "Ready" ...
	I1213 09:05:59.368045  248940 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:05:59.368084  248940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:05:59.380141  248940 api_server.go:72] duration metric: took 171.475113ms to wait for apiserver process to appear ...
	I1213 09:05:59.380165  248940 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:05:59.380185  248940 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:05:59.385996  248940 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:05:59.386982  248940 api_server.go:141] control plane version: v1.34.2
	I1213 09:05:59.387008  248940 api_server.go:131] duration metric: took 6.835086ms to wait for apiserver health ...
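
The health wait above is a plain HTTPS GET against https://192.168.94.2:8443/healthz, retried until it answers 200 ok or a timeout expires. A minimal polling loop of that shape, assuming the cluster CA at the path used elsewhere in this run, could look like:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it returns 200 or the
// deadline passes. caPath points at the cluster CA bundle.
func waitHealthz(url, caPath string, timeout time.Duration) error {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("no certificates parsed from %s", caPath)
	}
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", "/var/lib/minikube/certs/ca.crt", time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ok")
}
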
	I1213 09:05:59.387020  248940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:05:59.390442  248940 system_pods.go:59] 7 kube-system pods found
	I1213 09:05:59.390473  248940 system_pods.go:61] "coredns-66bc5c9577-hk5s7" [24cd5294-2cc0-4531-95d0-d76b080cfb9c] Running
	I1213 09:05:59.390480  248940 system_pods.go:61] "etcd-pause-154627" [68af6eec-55e4-4857-ba19-452c173f0a20] Running
	I1213 09:05:59.390518  248940 system_pods.go:61] "kindnet-6flbf" [270267f6-c0f2-46d3-ad8e-51338f23dfb1] Running
	I1213 09:05:59.390528  248940 system_pods.go:61] "kube-apiserver-pause-154627" [a41cb4bc-7340-4c8e-9a2b-2ecd61d3a14f] Running
	I1213 09:05:59.390534  248940 system_pods.go:61] "kube-controller-manager-pause-154627" [8b504109-70f4-4b9d-8226-ac85ef14b856] Running
	I1213 09:05:59.390541  248940 system_pods.go:61] "kube-proxy-fsr5p" [dfeb890f-7b80-4e29-96a3-4c35be793bfa] Running
	I1213 09:05:59.390546  248940 system_pods.go:61] "kube-scheduler-pause-154627" [ad53a8fb-6b9d-4498-8fba-22ac8e21fc87] Running
	I1213 09:05:59.390556  248940 system_pods.go:74] duration metric: took 3.528659ms to wait for pod list to return data ...
	I1213 09:05:59.390568  248940 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:05:59.392499  248940 default_sa.go:45] found service account: "default"
	I1213 09:05:59.392519  248940 default_sa.go:55] duration metric: took 1.943328ms for default service account to be created ...
	I1213 09:05:59.392528  248940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:05:59.395133  248940 system_pods.go:86] 7 kube-system pods found
	I1213 09:05:59.395167  248940 system_pods.go:89] "coredns-66bc5c9577-hk5s7" [24cd5294-2cc0-4531-95d0-d76b080cfb9c] Running
	I1213 09:05:59.395177  248940 system_pods.go:89] "etcd-pause-154627" [68af6eec-55e4-4857-ba19-452c173f0a20] Running
	I1213 09:05:59.395182  248940 system_pods.go:89] "kindnet-6flbf" [270267f6-c0f2-46d3-ad8e-51338f23dfb1] Running
	I1213 09:05:59.395186  248940 system_pods.go:89] "kube-apiserver-pause-154627" [a41cb4bc-7340-4c8e-9a2b-2ecd61d3a14f] Running
	I1213 09:05:59.395190  248940 system_pods.go:89] "kube-controller-manager-pause-154627" [8b504109-70f4-4b9d-8226-ac85ef14b856] Running
	I1213 09:05:59.395193  248940 system_pods.go:89] "kube-proxy-fsr5p" [dfeb890f-7b80-4e29-96a3-4c35be793bfa] Running
	I1213 09:05:59.395197  248940 system_pods.go:89] "kube-scheduler-pause-154627" [ad53a8fb-6b9d-4498-8fba-22ac8e21fc87] Running
	I1213 09:05:59.395202  248940 system_pods.go:126] duration metric: took 2.668924ms to wait for k8s-apps to be running ...
	I1213 09:05:59.395213  248940 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:05:59.395251  248940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:05:59.408866  248940 system_svc.go:56] duration metric: took 13.644098ms WaitForService to wait for kubelet
	I1213 09:05:59.408894  248940 kubeadm.go:587] duration metric: took 200.241824ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:05:59.408918  248940 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:05:59.411456  248940 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:05:59.411518  248940 node_conditions.go:123] node cpu capacity is 8
	I1213 09:05:59.411537  248940 node_conditions.go:105] duration metric: took 2.609002ms to run NodePressure ...
	I1213 09:05:59.411551  248940 start.go:242] waiting for startup goroutines ...
	I1213 09:05:59.411565  248940 start.go:247] waiting for cluster config update ...
	I1213 09:05:59.411580  248940 start.go:256] writing updated cluster config ...
	I1213 09:05:59.411924  248940 ssh_runner.go:195] Run: rm -f paused
	I1213 09:05:59.415780  248940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:05:59.416547  248940 kapi.go:59] client config for pause-154627: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/client.crt", KeyFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/profiles/pause-154627/client.key", CAFile:"/home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1213 09:05:59.419655  248940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hk5s7" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.424005  248940 pod_ready.go:94] pod "coredns-66bc5c9577-hk5s7" is "Ready"
	I1213 09:05:59.424024  248940 pod_ready.go:86] duration metric: took 4.279099ms for pod "coredns-66bc5c9577-hk5s7" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.426031  248940 pod_ready.go:83] waiting for pod "etcd-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.429390  248940 pod_ready.go:94] pod "etcd-pause-154627" is "Ready"
	I1213 09:05:59.429409  248940 pod_ready.go:86] duration metric: took 3.362487ms for pod "etcd-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.431251  248940 pod_ready.go:83] waiting for pod "kube-apiserver-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.435425  248940 pod_ready.go:94] pod "kube-apiserver-pause-154627" is "Ready"
	I1213 09:05:59.435440  248940 pod_ready.go:86] duration metric: took 4.170679ms for pod "kube-apiserver-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.437176  248940 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:05:59.820229  248940 pod_ready.go:94] pod "kube-controller-manager-pause-154627" is "Ready"
	I1213 09:05:59.820267  248940 pod_ready.go:86] duration metric: took 383.059426ms for pod "kube-controller-manager-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:06:00.020432  248940 pod_ready.go:83] waiting for pod "kube-proxy-fsr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:06:00.420135  248940 pod_ready.go:94] pod "kube-proxy-fsr5p" is "Ready"
	I1213 09:06:00.420170  248940 pod_ready.go:86] duration metric: took 399.712514ms for pod "kube-proxy-fsr5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:06:00.620372  248940 pod_ready.go:83] waiting for pod "kube-scheduler-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:06:01.020760  248940 pod_ready.go:94] pod "kube-scheduler-pause-154627" is "Ready"
	I1213 09:06:01.020786  248940 pod_ready.go:86] duration metric: took 400.387038ms for pod "kube-scheduler-pause-154627" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:06:01.020798  248940 pod_ready.go:40] duration metric: took 1.604975486s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:06:01.070119  248940 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:06:01.071794  248940 out.go:179] * Done! kubectl is now configured to use "pause-154627" cluster and "default" namespace by default
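
The final wait in this run checks that every control-plane pod matched by the component/k8s-app labels reports the Ready condition. A condensed version of that check with client-go is sketched below, assuming a kubeconfig path; minikube's pod_ready.go additionally bounds the wait (4m0s here) and treats pods that disappear as acceptable.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady lists kube-system pods matching a label selector and reports
// whether every one of them has the Ready condition set to True.
func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder kubeconfig
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		ok, err := podsReady(cs, sel)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s ready: %v\n", sel, ok)
	}
}
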
	I1213 09:05:56.260762  234686 logs.go:123] Gathering logs for kube-apiserver [226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e] ...
	I1213 09:05:56.260791  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e"
	I1213 09:05:56.296736  234686 logs.go:123] Gathering logs for kube-controller-manager [f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6] ...
	I1213 09:05:56.296771  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6"
	I1213 09:05:56.328901  234686 logs.go:123] Gathering logs for CRI-O ...
	I1213 09:05:56.328930  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 09:05:56.392085  234686 logs.go:123] Gathering logs for dmesg ...
	I1213 09:05:56.392122  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:05:56.408052  234686 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:05:56.408091  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:05:56.474827  234686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:05:56.474850  234686 logs.go:123] Gathering logs for kube-scheduler [9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973] ...
	I1213 09:05:56.474880  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973"
	I1213 09:05:56.502765  234686 logs.go:123] Gathering logs for kube-controller-manager [e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb] ...
	I1213 09:05:56.502794  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb"
	I1213 09:05:56.529418  234686 logs.go:123] Gathering logs for container status ...
	I1213 09:05:56.529452  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:05:59.063421  234686 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1213 09:05:59.063912  234686 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1213 09:05:59.063972  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 09:05:59.064024  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 09:05:59.093789  234686 cri.go:89] found id: "226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e"
	I1213 09:05:59.093814  234686 cri.go:89] found id: ""
	I1213 09:05:59.093823  234686 logs.go:282] 1 containers: [226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e]
	I1213 09:05:59.093890  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:59.097965  234686 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 09:05:59.098035  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 09:05:59.127936  234686 cri.go:89] found id: ""
	I1213 09:05:59.127958  234686 logs.go:282] 0 containers: []
	W1213 09:05:59.127970  234686 logs.go:284] No container was found matching "etcd"
	I1213 09:05:59.127977  234686 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 09:05:59.128027  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 09:05:59.163223  234686 cri.go:89] found id: ""
	I1213 09:05:59.163251  234686 logs.go:282] 0 containers: []
	W1213 09:05:59.163262  234686 logs.go:284] No container was found matching "coredns"
	I1213 09:05:59.163270  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 09:05:59.163333  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 09:05:59.194092  234686 cri.go:89] found id: "9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973"
	I1213 09:05:59.194117  234686 cri.go:89] found id: ""
	I1213 09:05:59.194126  234686 logs.go:282] 1 containers: [9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973]
	I1213 09:05:59.194182  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:59.198478  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 09:05:59.198561  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 09:05:59.232769  234686 cri.go:89] found id: ""
	I1213 09:05:59.232796  234686 logs.go:282] 0 containers: []
	W1213 09:05:59.232808  234686 logs.go:284] No container was found matching "kube-proxy"
	I1213 09:05:59.232815  234686 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 09:05:59.232868  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 09:05:59.269878  234686 cri.go:89] found id: "e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb"
	I1213 09:05:59.269902  234686 cri.go:89] found id: "f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6"
	I1213 09:05:59.269908  234686 cri.go:89] found id: ""
	I1213 09:05:59.269925  234686 logs.go:282] 2 containers: [e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6]
	I1213 09:05:59.269981  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:59.274456  234686 ssh_runner.go:195] Run: which crictl
	I1213 09:05:59.278406  234686 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 09:05:59.278479  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 09:05:59.311291  234686 cri.go:89] found id: ""
	I1213 09:05:59.311314  234686 logs.go:282] 0 containers: []
	W1213 09:05:59.311326  234686 logs.go:284] No container was found matching "kindnet"
	I1213 09:05:59.311334  234686 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1213 09:05:59.311393  234686 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1213 09:05:59.339398  234686 cri.go:89] found id: ""
	I1213 09:05:59.339424  234686 logs.go:282] 0 containers: []
	W1213 09:05:59.339437  234686 logs.go:284] No container was found matching "storage-provisioner"
	I1213 09:05:59.339457  234686 logs.go:123] Gathering logs for kube-controller-manager [f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6] ...
	I1213 09:05:59.339471  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f323705fb597e928f18f942a812e12b4e6685835d26d39289886cf9f3a25c3a6"
	I1213 09:05:59.370532  234686 logs.go:123] Gathering logs for CRI-O ...
	I1213 09:05:59.370563  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 09:05:59.420254  234686 logs.go:123] Gathering logs for kube-apiserver [226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e] ...
	I1213 09:05:59.420282  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 226360325f058cc86edacf9b3c3a43364f41400e6b40d72b450e0dd95f4b943e"
	I1213 09:05:59.455529  234686 logs.go:123] Gathering logs for kube-controller-manager [e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb] ...
	I1213 09:05:59.455562  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e1267cad6e4876c97bd7d44d346a67ea8219ae53400896efdaad7bd4093b71cb"
	I1213 09:05:59.482739  234686 logs.go:123] Gathering logs for container status ...
	I1213 09:05:59.482771  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 09:05:59.516280  234686 logs.go:123] Gathering logs for kubelet ...
	I1213 09:05:59.516314  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1213 09:05:59.590842  234686 logs.go:123] Gathering logs for dmesg ...
	I1213 09:05:59.590881  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 09:05:59.608839  234686 logs.go:123] Gathering logs for describe nodes ...
	I1213 09:05:59.608870  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1213 09:05:59.670803  234686 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1213 09:05:59.670825  234686 logs.go:123] Gathering logs for kube-scheduler [9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973] ...
	I1213 09:05:59.670840  234686 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9126d55455668203ec08f57fb3c71ccdfef41b929c38bae5fdc6b2a919e89973"
	
	
	==> CRI-O <==
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.896863724Z" level=info msg="RDT not available in the host system"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.896885053Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897725671Z" level=info msg="Conmon does support the --sync option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897747372Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897762312Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.898478169Z" level=info msg="Conmon does support the --sync option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.898527051Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.902834296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.902852858Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.903387986Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.9038313Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.903895799Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990207608Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-hk5s7 Namespace:kube-system ID:27ecb51aaf3fc08523c672be0184defe3e5a7740957b75d3d7bf60963ab1423e UID:24cd5294-2cc0-4531-95d0-d76b080cfb9c NetNS:/var/run/netns/a92c8894-f97b-4415-bf3d-03df1386e8df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008e4050}] Aliases:map[]}"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.9904117Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-hk5s7 for CNI network kindnet (type=ptp)"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990886207Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990928019Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990987746Z" level=info msg="Create NRI interface"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991117197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991130488Z" level=info msg="runtime interface created"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991143322Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991150651Z" level=info msg="runtime interface starting up..."
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.99115908Z" level=info msg="starting plugins..."
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.99117721Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991580932Z" level=info msg="No systemd watchdog enabled"
	Dec 13 09:05:57 pause-154627 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ec638ae8ac11e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago       Running             coredns                   0                   27ecb51aaf3fc       coredns-66bc5c9577-hk5s7               kube-system
	6a46808052602       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   53 seconds ago       Running             kindnet-cni               0                   7a6fd272d6299       kindnet-6flbf                          kube-system
	5f6879a119d15       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   53 seconds ago       Running             kube-proxy                0                   3fcca8a59ae86       kube-proxy-fsr5p                       kube-system
	bc3f8ae67ef0c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   d524c8f23482f       kube-controller-manager-pause-154627   kube-system
	e146505fbdb9e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   1292c55fc3fdb       kube-apiserver-pause-154627            kube-system
	3ced2f862795c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   9271e23f3d78c       etcd-pause-154627                      kube-system
	a5aa4a46d79b8       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   2fe4191d51d1d       kube-scheduler-pause-154627            kube-system
	
	
	==> coredns [ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39738 - 60627 "HINFO IN 489782804459658984.4362697455592775689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.118172092s
	
	
	==> describe nodes <==
	Name:               pause-154627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-154627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=pause-154627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_05_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:05:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-154627
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-154627
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                ad4ab789-f1d3-4493-8c96-78f38e2f95d0
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-hk5s7                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-pause-154627                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-6flbf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-pause-154627             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-pause-154627    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-fsr5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-154627             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node pause-154627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node pause-154627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node pause-154627 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node pause-154627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node pause-154627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node pause-154627 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node pause-154627 event: Registered Node pause-154627 in Controller
	  Normal  NodeReady                12s                kubelet          Node pause-154627 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.083084] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023653] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.640510] kauditd_printk_skb: 47 callbacks suppressed
	[Dec13 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.043569] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023846] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023889] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +2.047766] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +4.031542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +8.511095] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 08:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[ +32.252585] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	
	
	==> etcd [3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b] <==
	{"level":"warn","ts":"2025-12-13T09:05:01.338747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.345392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.351782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.360512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.367213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.373993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.380738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.395529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.410665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.419677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.426173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.433069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.439421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.446201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.453448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.460769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.470031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.476448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.483053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.489852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.495992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.512290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.519705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.526038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.568351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:06:03 up 48 min,  0 user,  load average: 2.53, 2.60, 1.70
	Linux pause-154627 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a] <==
	I1213 09:05:10.877836       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:05:10.878102       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:05:10.878259       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:05:10.878276       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:05:10.878298       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:05:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:05:11.083258       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:05:11.083280       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:05:11.083292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:05:11.083700       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 09:05:41.083459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 09:05:41.083726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 09:05:41.084047       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 09:05:41.084147       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1213 09:05:42.784247       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:05:42.784283       1 metrics.go:72] Registering metrics
	I1213 09:05:42.784369       1 controller.go:711] "Syncing nftables rules"
	I1213 09:05:51.089859       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:05:51.089932       1 main.go:301] handling current node
	I1213 09:06:01.090608       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:06:01.090919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e] <==
	I1213 09:05:02.094055       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:05:02.095131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:05:02.098685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:05:02.099459       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 09:05:02.100820       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:02.105747       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:02.105896       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:05:02.118373       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:05:02.997874       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 09:05:03.001944       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:05:03.001964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:05:03.483446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:05:03.519975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:05:03.602415       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:05:03.608091       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1213 09:05:03.609015       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:05:03.613017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:05:04.022285       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:05:04.795880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:05:04.805777       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:05:04.815965       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:05:09.278530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:09.283057       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:09.725637       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 09:05:10.124849       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817] <==
	I1213 09:05:09.021261       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:05:09.021283       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-154627"
	I1213 09:05:09.021328       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 09:05:09.022336       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 09:05:09.022354       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:05:09.022527       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:05:09.022608       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:05:09.022759       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:05:09.023085       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:05:09.024344       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:05:09.024368       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:05:09.025070       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:05:09.025075       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:05:09.025096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:05:09.028464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:05:09.028540       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:05:09.028597       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:05:09.028634       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:05:09.028641       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:05:09.028646       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:05:09.032095       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 09:05:09.036582       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-154627" podCIDRs=["10.244.0.0/24"]
	I1213 09:05:09.039426       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:05:09.044758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:05:54.026523       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493] <==
	I1213 09:05:10.740401       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:05:10.822330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:05:10.923229       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:05:10.923274       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:05:10.923372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:05:10.943395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:05:10.943448       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:05:10.949027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:05:10.949520       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:05:10.949556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:05:10.950986       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:05:10.951017       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:05:10.951043       1 config.go:200] "Starting service config controller"
	I1213 09:05:10.951064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:05:10.951056       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:05:10.951074       1 config.go:309] "Starting node config controller"
	I1213 09:05:10.951100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:05:10.951110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:05:10.951113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:05:11.051208       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:05:11.051221       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:05:11.051247       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec] <==
	E1213 09:05:02.043480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:05:02.043539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:05:02.043584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:05:02.043909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:05:02.044036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:05:02.044379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:05:02.044455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:05:02.044537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:05:02.044561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:05:02.044573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:05:02.044632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:05:02.044473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:05:02.044658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:05:02.044798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:05:02.848914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:05:02.879360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:05:02.905799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:05:02.985232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:05:03.010608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:05:03.142753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:05:03.153926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:05:03.279521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:05:03.312615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:05:03.321640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1213 09:05:06.241079       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.047932    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850201    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-xtables-lock\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850252    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-lib-modules\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850280    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-proxy\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850296    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-cni-cfg\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850311    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfeb890f-7b80-4e29-96a3-4c35be793bfa-xtables-lock\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850327    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf694\" (UniqueName: \"kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850353    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fkv\" (UniqueName: \"kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850380    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfeb890f-7b80-4e29-96a3-4c35be793bfa-lib-modules\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956620    1331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956651    1331 projected.go:196] Error preparing data for projected volume kube-api-access-kf694 for pod kube-system/kube-proxy-fsr5p: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956726    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694 podName:dfeb890f-7b80-4e29-96a3-4c35be793bfa nodeName:}" failed. No retries permitted until 2025-12-13 09:05:10.456698781 +0000 UTC m=+5.912373037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kf694" (UniqueName: "kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694") pod "kube-proxy-fsr5p" (UID: "dfeb890f-7b80-4e29-96a3-4c35be793bfa") : configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957300    1331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957327    1331 projected.go:196] Error preparing data for projected volume kube-api-access-t7fkv for pod kube-system/kindnet-6flbf: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957374    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv podName:270267f6-c0f2-46d3-ad8e-51338f23dfb1 nodeName:}" failed. No retries permitted until 2025-12-13 09:05:10.457359971 +0000 UTC m=+5.913034231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t7fkv" (UniqueName: "kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv") pod "kindnet-6flbf" (UID: "270267f6-c0f2-46d3-ad8e-51338f23dfb1") : configmap "kube-root-ca.crt" not found
	Dec 13 09:05:11 pause-154627 kubelet[1331]: I1213 09:05:11.680916    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fsr5p" podStartSLOduration=2.680891962 podStartE2EDuration="2.680891962s" podCreationTimestamp="2025-12-13 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:11.680790367 +0000 UTC m=+7.136464643" watchObservedRunningTime="2025-12-13 09:05:11.680891962 +0000 UTC m=+7.136566240"
	Dec 13 09:05:11 pause-154627 kubelet[1331]: I1213 09:05:11.681052    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6flbf" podStartSLOduration=2.681042717 podStartE2EDuration="2.681042717s" podCreationTimestamp="2025-12-13 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:11.671068118 +0000 UTC m=+7.126742395" watchObservedRunningTime="2025-12-13 09:05:11.681042717 +0000 UTC m=+7.136716995"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.437159    1331 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.558049    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4srr\" (UniqueName: \"kubernetes.io/projected/24cd5294-2cc0-4531-95d0-d76b080cfb9c-kube-api-access-w4srr\") pod \"coredns-66bc5c9577-hk5s7\" (UID: \"24cd5294-2cc0-4531-95d0-d76b080cfb9c\") " pod="kube-system/coredns-66bc5c9577-hk5s7"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.558105    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24cd5294-2cc0-4531-95d0-d76b080cfb9c-config-volume\") pod \"coredns-66bc5c9577-hk5s7\" (UID: \"24cd5294-2cc0-4531-95d0-d76b080cfb9c\") " pod="kube-system/coredns-66bc5c9577-hk5s7"
	Dec 13 09:05:52 pause-154627 kubelet[1331]: I1213 09:05:52.762422    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hk5s7" podStartSLOduration=42.762396498 podStartE2EDuration="42.762396498s" podCreationTimestamp="2025-12-13 09:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:52.762368608 +0000 UTC m=+48.218042884" watchObservedRunningTime="2025-12-13 09:05:52.762396498 +0000 UTC m=+48.218070775"
	Dec 13 09:06:01 pause-154627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:06:01 pause-154627 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:06:01 pause-154627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:01 pause-154627 systemd[1]: kubelet.service: Consumed 2.257s CPU time.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-154627 -n pause-154627
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-154627 -n pause-154627: exit status 2 (341.821376ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-154627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-154627
helpers_test.go:244: (dbg) docker inspect pause-154627:

-- stdout --
	[
	    {
	        "Id": "21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf",
	        "Created": "2025-12-13T09:04:48.46871142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 240630,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:04:48.509736551Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf/21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf-json.log",
	        "Name": "/pause-154627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-154627:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-154627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "21d07c33d7593d430c071e8ec4567efd0c489783fc7f55cd3ea16c87dfc55dbf",
	                "LowerDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7275ec256e144d9c6ee79112502fd8e233fb3d5e9d825ca7fd0f9d334026607c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-154627",
	                "Source": "/var/lib/docker/volumes/pause-154627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-154627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-154627",
	                "name.minikube.sigs.k8s.io": "pause-154627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e25e74792d47b776af0a26ad4ee8d05375f74eeb7fa4fe219e7291838873f3f",
	            "SandboxKey": "/var/run/docker/netns/1e25e74792d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-154627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90d530bbac8b03396f3ea78a03ea5f23d30f164f9befb3f25533055510f14e64",
	                    "EndpointID": "ec3c5f2476989594142d389c6e00991445f2a33c8506429278de7ff2022b74f5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "8e:c7:75:77:56:b9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-154627",
	                        "21d07c33d759"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-154627 -n pause-154627
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-154627 -n pause-154627: exit status 2 (338.70098ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-154627 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-154627 logs -n 25: (1.054897202s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-833990 sudo systemctl cat kubelet --no-pager                                                                                     │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo journalctl -xeu kubelet --all --full --no-pager                                                                      │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/kubernetes/kubelet.conf                                                                                     │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /var/lib/kubelet/config.yaml                                                                                     │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status docker --all --full --no-pager                                                                      │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat docker --no-pager                                                                                      │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/docker/daemon.json                                                                                          │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo docker system info                                                                                                   │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl status cri-docker --all --full --no-pager                                                                  │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat cri-docker --no-pager                                                                                  │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                             │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                       │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo cri-dockerd --version                                                                                                │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:05 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status containerd --all --full --no-pager                                                                  │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat containerd --no-pager                                                                                  │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:05 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo cat /lib/systemd/system/containerd.service                                                                           │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo cat /etc/containerd/config.toml                                                                                      │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo containerd config dump                                                                                               │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo systemctl status crio --all --full --no-pager                                                                        │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ pause   │ -p pause-154627 --alsologtostderr -v=5                                                                                                   │ pause-154627   │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	│ ssh     │ -p auto-833990 sudo systemctl cat crio --no-pager                                                                                        │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                              │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ ssh     │ -p auto-833990 sudo crio config                                                                                                          │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ delete  │ -p auto-833990                                                                                                                           │ auto-833990    │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │ 13 Dec 25 09:06 UTC │
	│ start   │ -p kindnet-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-833990 │ jenkins │ v1.37.0 │ 13 Dec 25 09:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:06:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:06:04.991243  254310 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:06:04.991513  254310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:04.991525  254310 out.go:374] Setting ErrFile to fd 2...
	I1213 09:06:04.991532  254310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:06:04.991775  254310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:06:04.992380  254310 out.go:368] Setting JSON to false
	I1213 09:06:04.993679  254310 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2917,"bootTime":1765613848,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:06:04.993733  254310 start.go:143] virtualization: kvm guest
	I1213 09:06:04.995847  254310 out.go:179] * [kindnet-833990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:06:04.997172  254310 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:06:04.997182  254310 notify.go:221] Checking for updates...
	I1213 09:06:04.999693  254310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:06:05.001135  254310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:06:05.002455  254310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:06:05.004791  254310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:06:05.005965  254310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.896863724Z" level=info msg="RDT not available in the host system"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.896885053Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897725671Z" level=info msg="Conmon does support the --sync option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897747372Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.897762312Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.898478169Z" level=info msg="Conmon does support the --sync option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.898527051Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.902834296Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.902852858Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.903387986Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.9038313Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.903895799Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990207608Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-hk5s7 Namespace:kube-system ID:27ecb51aaf3fc08523c672be0184defe3e5a7740957b75d3d7bf60963ab1423e UID:24cd5294-2cc0-4531-95d0-d76b080cfb9c NetNS:/var/run/netns/a92c8894-f97b-4415-bf3d-03df1386e8df Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008e4050}] Aliases:map[]}"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.9904117Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-hk5s7 for CNI network kindnet (type=ptp)"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990886207Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990928019Z" level=info msg="Starting seccomp notifier watcher"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.990987746Z" level=info msg="Create NRI interface"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991117197Z" level=info msg="built-in NRI default validator is disabled"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991130488Z" level=info msg="runtime interface created"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991143322Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991150651Z" level=info msg="runtime interface starting up..."
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.99115908Z" level=info msg="starting plugins..."
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.99117721Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 13 09:05:57 pause-154627 crio[2194]: time="2025-12-13T09:05:57.991580932Z" level=info msg="No systemd watchdog enabled"
	Dec 13 09:05:57 pause-154627 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ec638ae8ac11e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   27ecb51aaf3fc       coredns-66bc5c9577-hk5s7               kube-system
	6a46808052602       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   7a6fd272d6299       kindnet-6flbf                          kube-system
	5f6879a119d15       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   54 seconds ago       Running             kube-proxy                0                   3fcca8a59ae86       kube-proxy-fsr5p                       kube-system
	bc3f8ae67ef0c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   About a minute ago   Running             kube-controller-manager   0                   d524c8f23482f       kube-controller-manager-pause-154627   kube-system
	e146505fbdb9e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   About a minute ago   Running             kube-apiserver            0                   1292c55fc3fdb       kube-apiserver-pause-154627            kube-system
	3ced2f862795c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Running             etcd                      0                   9271e23f3d78c       etcd-pause-154627                      kube-system
	a5aa4a46d79b8       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Running             kube-scheduler            0                   2fe4191d51d1d       kube-scheduler-pause-154627            kube-system
	
	
	==> coredns [ec638ae8ac11e6abe28859809e9150bfb5486e11a4f4adae91efb74a3173f5bc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39738 - 60627 "HINFO IN 489782804459658984.4362697455592775689. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.118172092s
	
	
	==> describe nodes <==
	Name:               pause-154627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-154627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=pause-154627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_05_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:05:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-154627
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:05:51 +0000   Sat, 13 Dec 2025 09:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-154627
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                ad4ab789-f1d3-4493-8c96-78f38e2f95d0
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-hk5s7                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-154627                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-6flbf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-pause-154627             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-154627    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-fsr5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-pause-154627             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node pause-154627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node pause-154627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node pause-154627 status is now: NodeHasSufficientPID
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s                kubelet          Node pause-154627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s                kubelet          Node pause-154627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s                kubelet          Node pause-154627 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node pause-154627 event: Registered Node pause-154627 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-154627 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.083084] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023653] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.640510] kauditd_printk_skb: 47 callbacks suppressed
	[Dec13 08:31] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.043569] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023867] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023846] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023869] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +1.023889] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +2.047766] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +4.031542] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[  +8.511095] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 08:32] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[ +32.252585] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	
	
	==> etcd [3ced2f862795cb2fce5d4764171f179e2773b29ad8f75125c10d2f2afb66900b] <==
	{"level":"warn","ts":"2025-12-13T09:05:01.338747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.345392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.351782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.360512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.367213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.373993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.380738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.395529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.410665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.419677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.426173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.433069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.439421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.446201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.453448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.460769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.470031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.476448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.483053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.489852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.495992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.512290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.519705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.526038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:05:01.568351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:06:05 up 48 min,  0 user,  load average: 2.48, 2.59, 1.70
	Linux pause-154627 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6a46808052602df5d0831b755246c5fe971f0f52075bfae1b49145aa17a0411a] <==
	I1213 09:05:10.877836       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:05:10.878102       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:05:10.878259       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:05:10.878276       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:05:10.878298       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:05:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:05:11.083258       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:05:11.083280       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:05:11.083292       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:05:11.083700       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1213 09:05:41.083459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1213 09:05:41.083726       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1213 09:05:41.084047       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1213 09:05:41.084147       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1213 09:05:42.784247       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:05:42.784283       1 metrics.go:72] Registering metrics
	I1213 09:05:42.784369       1 controller.go:711] "Syncing nftables rules"
	I1213 09:05:51.089859       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:05:51.089932       1 main.go:301] handling current node
	I1213 09:06:01.090608       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:06:01.090919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e146505fbdb9eaecb819e4208bddb167aeff33bd9b8eea7eef6387fc3b08173e] <==
	I1213 09:05:02.094055       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:05:02.095131       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:05:02.098685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:05:02.099459       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 09:05:02.100820       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:02.105747       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:02.105896       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:05:02.118373       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:05:02.997874       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 09:05:03.001944       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:05:03.001964       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:05:03.483446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:05:03.519975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:05:03.602415       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:05:03.608091       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1213 09:05:03.609015       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:05:03.613017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:05:04.022285       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:05:04.795880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:05:04.805777       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:05:04.815965       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:05:09.278530       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:09.283057       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:05:09.725637       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 09:05:10.124849       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [bc3f8ae67ef0c377544be455aad6ef4ca54298a1886c410c6322d13dcffe2817] <==
	I1213 09:05:09.021261       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:05:09.021283       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-154627"
	I1213 09:05:09.021328       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 09:05:09.022336       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1213 09:05:09.022354       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:05:09.022527       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:05:09.022608       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:05:09.022759       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:05:09.023085       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:05:09.024344       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:05:09.024368       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:05:09.025070       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:05:09.025075       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:05:09.025096       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:05:09.028464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:05:09.028540       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:05:09.028597       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:05:09.028634       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:05:09.028641       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:05:09.028646       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:05:09.032095       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1213 09:05:09.036582       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-154627" podCIDRs=["10.244.0.0/24"]
	I1213 09:05:09.039426       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:05:09.044758       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:05:54.026523       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5f6879a119d15aec10fc047dc4d90bbed854ebe2f056952892db23713a69f493] <==
	I1213 09:05:10.740401       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:05:10.822330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:05:10.923229       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:05:10.923274       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:05:10.923372       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:05:10.943395       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:05:10.943448       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:05:10.949027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:05:10.949520       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:05:10.949556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:05:10.950986       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:05:10.951017       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:05:10.951043       1 config.go:200] "Starting service config controller"
	I1213 09:05:10.951064       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:05:10.951056       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:05:10.951074       1 config.go:309] "Starting node config controller"
	I1213 09:05:10.951100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:05:10.951110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:05:10.951113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:05:11.051208       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:05:11.051221       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:05:11.051247       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5aa4a46d79b87fa07b08b5190ff278a4e8b3ed0babc007ab2be3a0c5eb350ec] <==
	E1213 09:05:02.043480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:05:02.043539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:05:02.043584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:05:02.043909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:05:02.044036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:05:02.044379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:05:02.044455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:05:02.044537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:05:02.044561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:05:02.044573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:05:02.044632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:05:02.044473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:05:02.044658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:05:02.044798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:05:02.848914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:05:02.879360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:05:02.905799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:05:02.985232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:05:03.010608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:05:03.142753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:05:03.153926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:05:03.279521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:05:03.312615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:05:03.321640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1213 09:05:06.241079       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.047932    1331 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850201    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-xtables-lock\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850252    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-lib-modules\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850280    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-proxy\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850296    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/270267f6-c0f2-46d3-ad8e-51338f23dfb1-cni-cfg\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850311    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfeb890f-7b80-4e29-96a3-4c35be793bfa-xtables-lock\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850327    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf694\" (UniqueName: \"kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850353    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fkv\" (UniqueName: \"kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv\") pod \"kindnet-6flbf\" (UID: \"270267f6-c0f2-46d3-ad8e-51338f23dfb1\") " pod="kube-system/kindnet-6flbf"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: I1213 09:05:09.850380    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfeb890f-7b80-4e29-96a3-4c35be793bfa-lib-modules\") pod \"kube-proxy-fsr5p\" (UID: \"dfeb890f-7b80-4e29-96a3-4c35be793bfa\") " pod="kube-system/kube-proxy-fsr5p"
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956620    1331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956651    1331 projected.go:196] Error preparing data for projected volume kube-api-access-kf694 for pod kube-system/kube-proxy-fsr5p: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.956726    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694 podName:dfeb890f-7b80-4e29-96a3-4c35be793bfa nodeName:}" failed. No retries permitted until 2025-12-13 09:05:10.456698781 +0000 UTC m=+5.912373037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kf694" (UniqueName: "kubernetes.io/projected/dfeb890f-7b80-4e29-96a3-4c35be793bfa-kube-api-access-kf694") pod "kube-proxy-fsr5p" (UID: "dfeb890f-7b80-4e29-96a3-4c35be793bfa") : configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957300    1331 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957327    1331 projected.go:196] Error preparing data for projected volume kube-api-access-t7fkv for pod kube-system/kindnet-6flbf: configmap "kube-root-ca.crt" not found
	Dec 13 09:05:09 pause-154627 kubelet[1331]: E1213 09:05:09.957374    1331 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv podName:270267f6-c0f2-46d3-ad8e-51338f23dfb1 nodeName:}" failed. No retries permitted until 2025-12-13 09:05:10.457359971 +0000 UTC m=+5.913034231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t7fkv" (UniqueName: "kubernetes.io/projected/270267f6-c0f2-46d3-ad8e-51338f23dfb1-kube-api-access-t7fkv") pod "kindnet-6flbf" (UID: "270267f6-c0f2-46d3-ad8e-51338f23dfb1") : configmap "kube-root-ca.crt" not found
	Dec 13 09:05:11 pause-154627 kubelet[1331]: I1213 09:05:11.680916    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fsr5p" podStartSLOduration=2.680891962 podStartE2EDuration="2.680891962s" podCreationTimestamp="2025-12-13 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:11.680790367 +0000 UTC m=+7.136464643" watchObservedRunningTime="2025-12-13 09:05:11.680891962 +0000 UTC m=+7.136566240"
	Dec 13 09:05:11 pause-154627 kubelet[1331]: I1213 09:05:11.681052    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6flbf" podStartSLOduration=2.681042717 podStartE2EDuration="2.681042717s" podCreationTimestamp="2025-12-13 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:11.671068118 +0000 UTC m=+7.126742395" watchObservedRunningTime="2025-12-13 09:05:11.681042717 +0000 UTC m=+7.136716995"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.437159    1331 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.558049    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4srr\" (UniqueName: \"kubernetes.io/projected/24cd5294-2cc0-4531-95d0-d76b080cfb9c-kube-api-access-w4srr\") pod \"coredns-66bc5c9577-hk5s7\" (UID: \"24cd5294-2cc0-4531-95d0-d76b080cfb9c\") " pod="kube-system/coredns-66bc5c9577-hk5s7"
	Dec 13 09:05:51 pause-154627 kubelet[1331]: I1213 09:05:51.558105    1331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24cd5294-2cc0-4531-95d0-d76b080cfb9c-config-volume\") pod \"coredns-66bc5c9577-hk5s7\" (UID: \"24cd5294-2cc0-4531-95d0-d76b080cfb9c\") " pod="kube-system/coredns-66bc5c9577-hk5s7"
	Dec 13 09:05:52 pause-154627 kubelet[1331]: I1213 09:05:52.762422    1331 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hk5s7" podStartSLOduration=42.762396498 podStartE2EDuration="42.762396498s" podCreationTimestamp="2025-12-13 09:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:05:52.762368608 +0000 UTC m=+48.218042884" watchObservedRunningTime="2025-12-13 09:05:52.762396498 +0000 UTC m=+48.218070775"
	Dec 13 09:06:01 pause-154627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:06:01 pause-154627 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:06:01 pause-154627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:06:01 pause-154627 systemd[1]: kubelet.service: Consumed 2.257s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-154627 -n pause-154627
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-154627 -n pause-154627: exit status 2 (337.909367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-154627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.37s)
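The post-mortem above ends by asking kubectl for any pod whose status.phase is not Running. A minimal Go sketch of that same check (a hypothetical helper for illustration, not the actual helpers_test.go code; it assumes kubectl is on PATH and that the pause-154627 context exists in the kubeconfig):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// notRunningPods mirrors the post-mortem query above: it lists pod names
	// across all namespaces whose status.phase is anything other than Running.
	func notRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return nil, err
		}
		// jsonpath joins names with spaces; an empty result means every pod is Running.
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := notRunningPods("pause-154627")
		fmt.Println(pods, err)
	}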

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (508.945441ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:09:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
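The MK_ADDON_ENABLE_PAUSED exit above comes from the pre-flight paused check spelled out in the error chain: before enabling an addon, the runtime containers inside the node are listed, and here `sudo runc list -f json` fails because the runc state directory /run/runc does not exist on this crio node. A rough Go sketch of such a probe (an illustration only, not minikube's actual implementation; it assumes the profile name and that `minikube ssh -- <cmd>` is usable from the host):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// anyPaused runs `runc list -f json` inside the node over minikube ssh and
	// reports whether any listed container is in the "paused" state.
	func anyPaused(profile string) (bool, error) {
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
			"sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The failure in the log above ("open /run/runc: no such file or
			// directory") surfaces here as a non-zero exit from runc.
			return false, fmt.Errorf("runc list: %w", err)
		}
		var containers []map[string]any
		if err := json.Unmarshal(out, &containers); err != nil {
			return false, err
		}
		for _, c := range containers {
			if s, ok := c["status"].(string); ok && strings.EqualFold(s, "paused") {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		paused, err := anyPaused("no-preload-291522")
		fmt.Println(paused, err)
	}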
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-291522 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-291522 describe deploy/metrics-server -n kube-system: exit status 1 (60.051219ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-291522 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-291522
helpers_test.go:244: (dbg) docker inspect no-preload-291522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	        "Created": "2025-12-13T09:09:03.465040092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 307951,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:09:04.763613999Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hosts",
	        "LogPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f-json.log",
	        "Name": "/no-preload-291522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-291522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	                "LowerDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-291522",
	                "Source": "/var/lib/docker/volumes/no-preload-291522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291522",
	                "name.minikube.sigs.k8s.io": "no-preload-291522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "53204f16545b4db7c5fd142083e81c862798da0474fcb18a200f6dff962ec56f",
	            "SandboxKey": "/var/run/docker/netns/53204f16545b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-291522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dfb09ffcf6775e2f48749a68987a54ab42f782835937d81dea2e3e4a543a7d9d",
	                    "EndpointID": "209f5b4957b241ed89ca346c4e0bd93a8d1920335fbf95a05442274f55a33c4d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0e:15:df:8e:fd:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291522",
	                        "8646883e9b39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
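The NetworkSettings.Ports block in the inspect dump above is what lets the host reach the node: port 22 inside the container is published on 127.0.0.1:33093. A small Go sketch of pulling that binding out of `docker inspect` output (a hypothetical helper for illustration, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectResult models just the piece of `docker inspect` output needed here:
	// host port bindings keyed by container port (e.g. "22/tcp").
	type inspectResult struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	// sshHostPort returns the host port mapped to the container's 22/tcp,
	// e.g. "33093" for no-preload-291522 in the dump above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var results []inspectResult
		if err := json.Unmarshal(out, &results); err != nil {
			return "", err
		}
		if len(results) == 0 {
			return "", fmt.Errorf("container %q not found", container)
		}
		bindings := results[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp on %q", container)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("no-preload-291522")
		fmt.Println(port, err)
	}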
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-291522 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-291522 logs -n 25: (1.336265951s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-833990 sudo systemctl cat kubelet --no-pager                                                                                                 │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                  │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/kubernetes/kubelet.conf                                                                                                 │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /var/lib/kubelet/config.yaml                                                                                                 │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status docker --all --full --no-pager                                                                                  │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat docker --no-pager                                                                                                  │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/docker/daemon.json                                                                                                      │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo docker system info                                                                                                               │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl status cri-docker --all --full --no-pager                                                                              │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat cri-docker --no-pager                                                                                              │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                         │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                   │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cri-dockerd --version                                                                                                            │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status containerd --all --full --no-pager                                                                              │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat containerd --no-pager                                                                                              │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /lib/systemd/system/containerd.service                                                                                       │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/containerd/config.toml                                                                                                  │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo containerd config dump                                                                                                           │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status crio --all --full --no-pager                                                                                    │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl cat crio --no-pager                                                                                                    │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                          │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                      │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                       │ bridge-833990      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ embed-certs-379362 │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-291522  │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:09:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:09:53.276589  318834 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:09:53.276690  318834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:09:53.276694  318834 out.go:374] Setting ErrFile to fd 2...
	I1213 09:09:53.276698  318834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:09:53.276944  318834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:09:53.277398  318834 out.go:368] Setting JSON to false
	I1213 09:09:53.278829  318834 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3145,"bootTime":1765613848,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:09:53.278999  318834 start.go:143] virtualization: kvm guest
	I1213 09:09:53.281039  318834 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:09:53.282775  318834 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:09:53.282773  318834 notify.go:221] Checking for updates...
	I1213 09:09:53.285223  318834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:09:53.286580  318834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:09:53.290984  318834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:09:53.292280  318834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:09:53.293543  318834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:09:53.295129  318834 config.go:182] Loaded profile config "kubernetes-upgrade-814560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:09:53.295254  318834 config.go:182] Loaded profile config "no-preload-291522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:09:53.295338  318834 config.go:182] Loaded profile config "old-k8s-version-234538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 09:09:53.295438  318834 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:09:53.321039  318834 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:09:53.321119  318834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:09:53.379204  318834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:09:53.369406629 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:09:53.379315  318834 docker.go:319] overlay module found
	I1213 09:09:53.381202  318834 out.go:179] * Using the docker driver based on user configuration
	I1213 09:09:53.382325  318834 start.go:309] selected driver: docker
	I1213 09:09:53.382351  318834 start.go:927] validating driver "docker" against <nil>
	I1213 09:09:53.382363  318834 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:09:53.382890  318834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:09:53.441299  318834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:09:53.43138383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:09:53.441505  318834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:09:53.441714  318834 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:09:53.443390  318834 out.go:179] * Using Docker driver with root privileges
	I1213 09:09:53.444531  318834 cni.go:84] Creating CNI manager for ""
	I1213 09:09:53.444592  318834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:09:53.444606  318834 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:09:53.444686  318834 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:09:53.446061  318834 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:09:53.447220  318834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:09:53.448400  318834 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:09:53.449579  318834 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:09:53.449612  318834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:09:53.449623  318834 cache.go:65] Caching tarball of preloaded images
	I1213 09:09:53.449704  318834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:09:53.449709  318834 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:09:53.449828  318834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:09:53.449943  318834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:09:53.449967  318834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json: {Name:mk7dfba1bcfca4eca45c7dccdef6b77ffab8fac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:09:53.470514  318834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:09:53.470538  318834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:09:53.470558  318834 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:09:53.470605  318834 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:09:53.470718  318834 start.go:364] duration metric: took 91.904µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:09:53.470757  318834 start.go:93] Provisioning new machine with config: &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:09:53.470837  318834 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 13 09:09:45 no-preload-291522 crio[771]: time="2025-12-13T09:09:45.742879306Z" level=info msg="Starting container: 00be865b294d397f54766c7a3156d7eebbfb648bdf6f8884927954014f9cb729" id=265ff54c-22ed-4014-8480-8b8796d4931a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:09:45 no-preload-291522 crio[771]: time="2025-12-13T09:09:45.744752161Z" level=info msg="Started container" PID=2839 containerID=00be865b294d397f54766c7a3156d7eebbfb648bdf6f8884927954014f9cb729 description=kube-system/coredns-7d764666f9-r95cr/coredns id=265ff54c-22ed-4014-8480-8b8796d4931a name=/runtime.v1.RuntimeService/StartContainer sandboxID=74876016e067a303b91ca73e8baf7490f8a655cc5e27d611724adc6781d29bfc
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.438519106Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ceb6d6e6-0b3f-4bfe-bd98-0b741bd023fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.438601123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.444408356Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9a6f685f900c9c029e8646461a95daa7feb8229239678984f43b67dc57c5587 UID:85e67eca-1cd0-4ca0-ad34-aed52941adf1 NetNS:/var/run/netns/e0595f94-09c4-4bcc-9458-3bc7b01072e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e88940}] Aliases:map[]}"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.444446071Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.4551641Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b9a6f685f900c9c029e8646461a95daa7feb8229239678984f43b67dc57c5587 UID:85e67eca-1cd0-4ca0-ad34-aed52941adf1 NetNS:/var/run/netns/e0595f94-09c4-4bcc-9458-3bc7b01072e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e88940}] Aliases:map[]}"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.455299987Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.456080274Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.457182539Z" level=info msg="Ran pod sandbox b9a6f685f900c9c029e8646461a95daa7feb8229239678984f43b67dc57c5587 with infra container: default/busybox/POD" id=ceb6d6e6-0b3f-4bfe-bd98-0b741bd023fa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.458541316Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=97bf1ce8-b409-475c-a5bd-14693c7dd4d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.458686175Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=97bf1ce8-b409-475c-a5bd-14693c7dd4d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.458725104Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=97bf1ce8-b409-475c-a5bd-14693c7dd4d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.460341408Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c674ade8-b45b-459b-a1e8-0317bb467961 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:09:48 no-preload-291522 crio[771]: time="2025-12-13T09:09:48.461724724Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.742906015Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c674ade8-b45b-459b-a1e8-0317bb467961 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.743608126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4502a811-912b-4978-a0af-37ca6b60c0c1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.745463664Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56c1a8c4-db8b-4815-90b6-cdcc5e628970 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.748879348Z" level=info msg="Creating container: default/busybox/busybox" id=877caf19-fc04-454a-aebc-391544ab2606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.749008603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.752476736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.752970906Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.785601862Z" level=info msg="Created container f3a586b064d047617869d734904ca93f37573351e74aa0573e29c59155017993: default/busybox/busybox" id=877caf19-fc04-454a-aebc-391544ab2606 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.786542154Z" level=info msg="Starting container: f3a586b064d047617869d734904ca93f37573351e74aa0573e29c59155017993" id=4b0334e0-407a-490f-a13e-bd7e53d2be2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:09:49 no-preload-291522 crio[771]: time="2025-12-13T09:09:49.788818599Z" level=info msg="Started container" PID=2914 containerID=f3a586b064d047617869d734904ca93f37573351e74aa0573e29c59155017993 description=default/busybox/busybox id=4b0334e0-407a-490f-a13e-bd7e53d2be2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9a6f685f900c9c029e8646461a95daa7feb8229239678984f43b67dc57c5587
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f3a586b064d04       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   b9a6f685f900c       busybox                                     default
	00be865b294d3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   74876016e067a       coredns-7d764666f9-r95cr                    kube-system
	744fadd8624d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   a3304c9ccf438       storage-provisioner                         kube-system
	0d04f03a95e82       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   d5c53cfbd6fac       kindnet-sm6z6                               kube-system
	27833e2f6ea67       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                      25 seconds ago      Running             kube-proxy                0                   9e3a1294ca5af       kube-proxy-ktgbz                            kube-system
	20960ed00039c       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                      35 seconds ago      Running             kube-controller-manager   0                   0ec00b1ff1a62       kube-controller-manager-no-preload-291522   kube-system
	9d107f3727a05       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                      35 seconds ago      Running             kube-scheduler            0                   2dff494163f10       kube-scheduler-no-preload-291522            kube-system
	706ec79239421       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                      35 seconds ago      Running             kube-apiserver            0                   99b0da5aa8db5       kube-apiserver-no-preload-291522            kube-system
	56bb98a9641cc       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      35 seconds ago      Running             etcd                      0                   9b7ce6315a345       etcd-no-preload-291522                      kube-system
	
	
	==> coredns [00be865b294d397f54766c7a3156d7eebbfb648bdf6f8884927954014f9cb729] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42808 - 43713 "HINFO IN 314992751938454988.186421325182187457. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.041534725s
	
	
	==> describe nodes <==
	Name:               no-preload-291522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-291522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=no-preload-291522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291522
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:09:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:09:57 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:09:57 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:09:57 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:09:57 +0000   Sat, 13 Dec 2025 09:09:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-291522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                c2de4dd6-9253-460d-81e1-9ad6236c08d3
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-r95cr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-291522                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-sm6z6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-291522             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-291522    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-ktgbz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-291522             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-291522 event: Registered Node no-preload-291522 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [56bb98a9641cc14ddb7f5503d183ac9bab9c218ed9b7fb0bcf512f06ce49ef34] <==
	{"level":"warn","ts":"2025-12-13T09:09:23.884882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.893004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.899132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.905304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.913108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.920275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.927761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.936532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.943168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.954705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.960550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.966921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.973660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.980324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.986403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.993617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:23.999682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.006558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.014037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.030710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.033975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.040196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.047379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.053469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:09:24.103523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:58 up 52 min,  0 user,  load average: 3.45, 3.19, 2.15
	Linux no-preload-291522 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0d04f03a95e82967e3ccaf9432c559b818e9695bd988cd9d3ff2e59d54ab3837] <==
	I1213 09:09:34.640770       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:09:34.641056       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:09:34.641332       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:09:34.641367       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:09:34.641380       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:09:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:09:34.846382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:09:34.846405       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:09:34.846415       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:09:34.924299       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:09:35.223691       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:09:35.223723       1 metrics.go:72] Registering metrics
	I1213 09:09:35.223808       1 controller.go:711] "Syncing nftables rules"
	I1213 09:09:44.847572       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:09:44.847657       1 main.go:301] handling current node
	I1213 09:09:54.845931       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:09:54.845983       1 main.go:301] handling current node
	
	
	==> kube-apiserver [706ec792394216a72547c6bcacdf4d4c5fd22b5ddcd2c17d4289040862a32e76] <==
	I1213 09:09:24.562150       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:09:24.563009       1 controller.go:667] quota admission added evaluator for: namespaces
	E1213 09:09:24.563912       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1213 09:09:24.565332       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1213 09:09:24.565429       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:09:24.570559       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:09:24.766672       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:09:25.469587       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1213 09:09:25.476745       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:09:25.476763       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:09:25.972906       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:09:26.010438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:09:26.070649       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:09:26.076870       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1213 09:09:26.077921       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:09:26.082091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:09:26.495840       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:09:26.954189       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:09:26.967584       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:09:26.978284       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:09:32.102255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:09:32.108184       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:09:32.173193       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:09:32.496040       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 09:09:56.366712       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:41584: use of closed network connection
	
	
	==> kube-controller-manager [20960ed00039c35b52f3b98373b31c27c0aa21a41f2478dbfc3a7aa3b0fd1629] <==
	I1213 09:09:31.300837       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.300869       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.300883       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.300887       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.300950       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301034       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301217       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301230       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301258       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301269       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301287       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301339       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301748       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301758       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301814       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.301748       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.302057       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.307883       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.309816       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-291522" podCIDRs=["10.244.0.0/24"]
	I1213 09:09:31.317147       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:09:31.401022       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:31.401042       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:09:31.401048       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:09:31.418240       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:46.302242       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [27833e2f6ea675ee571cd51bee61dec3c501c0e17a5021ef1a04ed461e83ca11] <==
	I1213 09:09:32.957665       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:09:33.045191       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:09:33.146645       1 shared_informer.go:377] "Caches are synced"
	I1213 09:09:33.146696       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:09:33.146841       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:09:33.182996       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:09:33.183143       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:09:33.191187       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:09:33.192115       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:09:33.194554       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:09:33.198384       1 config.go:200] "Starting service config controller"
	I1213 09:09:33.198451       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:09:33.198509       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:09:33.198537       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:09:33.198599       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:09:33.198625       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:09:33.199566       1 config.go:309] "Starting node config controller"
	I1213 09:09:33.204513       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:09:33.204533       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:09:33.299271       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:09:33.299315       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:09:33.299343       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9d107f3727a05c1ae86b468d6d03f49a4892f7af1bb521ecb19b56b82ac4491e] <==
	E1213 09:09:25.359031       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope"
	E1213 09:09:25.360151       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1213 09:09:25.378654       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 09:09:25.379667       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:09:25.411570       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1213 09:09:25.412887       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:09:25.459781       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:09:25.460986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:09:25.525434       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope"
	E1213 09:09:25.526452       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1213 09:09:25.563768       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:09:25.568733       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1213 09:09:25.569812       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:09:25.571077       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:09:25.582332       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 09:09:25.583347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1213 09:09:25.592430       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 09:09:25.593542       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1213 09:09:25.664974       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope"
	E1213 09:09:25.665920       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1213 09:09:25.681263       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1213 09:09:25.682399       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1213 09:09:25.735760       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope"
	E1213 09:09:25.739252       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I1213 09:09:28.115026       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549703    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lzh\" (UniqueName: \"kubernetes.io/projected/b0bac974-dddb-4c41-8c17-b9f35a3e918a-kube-api-access-c7lzh\") pod \"kube-proxy-ktgbz\" (UID: \"b0bac974-dddb-4c41-8c17-b9f35a3e918a\") " pod="kube-system/kube-proxy-ktgbz"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549732    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fc83086c-0e3b-4fb8-970e-573f20d37433-cni-cfg\") pod \"kindnet-sm6z6\" (UID: \"fc83086c-0e3b-4fb8-970e-573f20d37433\") " pod="kube-system/kindnet-sm6z6"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549801    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc83086c-0e3b-4fb8-970e-573f20d37433-lib-modules\") pod \"kindnet-sm6z6\" (UID: \"fc83086c-0e3b-4fb8-970e-573f20d37433\") " pod="kube-system/kindnet-sm6z6"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549867    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gxr\" (UniqueName: \"kubernetes.io/projected/fc83086c-0e3b-4fb8-970e-573f20d37433-kube-api-access-f4gxr\") pod \"kindnet-sm6z6\" (UID: \"fc83086c-0e3b-4fb8-970e-573f20d37433\") " pod="kube-system/kindnet-sm6z6"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549922    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0bac974-dddb-4c41-8c17-b9f35a3e918a-kube-proxy\") pod \"kube-proxy-ktgbz\" (UID: \"b0bac974-dddb-4c41-8c17-b9f35a3e918a\") " pod="kube-system/kube-proxy-ktgbz"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.549977    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0bac974-dddb-4c41-8c17-b9f35a3e918a-lib-modules\") pod \"kube-proxy-ktgbz\" (UID: \"b0bac974-dddb-4c41-8c17-b9f35a3e918a\") " pod="kube-system/kube-proxy-ktgbz"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.550090    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0bac974-dddb-4c41-8c17-b9f35a3e918a-xtables-lock\") pod \"kube-proxy-ktgbz\" (UID: \"b0bac974-dddb-4c41-8c17-b9f35a3e918a\") " pod="kube-system/kube-proxy-ktgbz"
	Dec 13 09:09:32 no-preload-291522 kubelet[2233]: I1213 09:09:32.989861    2233 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ktgbz" podStartSLOduration=0.989842145 podStartE2EDuration="989.842145ms" podCreationTimestamp="2025-12-13 09:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:32.989679007 +0000 UTC m=+6.191541578" watchObservedRunningTime="2025-12-13 09:09:32.989842145 +0000 UTC m=+6.191704710"
	Dec 13 09:09:38 no-preload-291522 kubelet[2233]: E1213 09:09:38.677310    2233 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-291522" containerName="kube-apiserver"
	Dec 13 09:09:38 no-preload-291522 kubelet[2233]: I1213 09:09:38.687000    2233 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-sm6z6" podStartSLOduration=5.167360396 podStartE2EDuration="6.686979057s" podCreationTimestamp="2025-12-13 09:09:32 +0000 UTC" firstStartedPulling="2025-12-13 09:09:32.82744421 +0000 UTC m=+6.029306756" lastFinishedPulling="2025-12-13 09:09:34.347062858 +0000 UTC m=+7.548925417" observedRunningTime="2025-12-13 09:09:34.992334089 +0000 UTC m=+8.194196655" watchObservedRunningTime="2025-12-13 09:09:38.686979057 +0000 UTC m=+11.888841623"
	Dec 13 09:09:39 no-preload-291522 kubelet[2233]: E1213 09:09:39.729731    2233 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-291522" containerName="kube-scheduler"
	Dec 13 09:09:41 no-preload-291522 kubelet[2233]: E1213 09:09:41.673732    2233 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-291522" containerName="kube-controller-manager"
	Dec 13 09:09:42 no-preload-291522 kubelet[2233]: E1213 09:09:42.077723    2233 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-291522" containerName="etcd"
	Dec 13 09:09:45 no-preload-291522 kubelet[2233]: I1213 09:09:45.365684    2233 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 13 09:09:45 no-preload-291522 kubelet[2233]: I1213 09:09:45.441832    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtdrj\" (UniqueName: \"kubernetes.io/projected/04be029a-867d-492e-9950-26ff6399fa3b-kube-api-access-wtdrj\") pod \"coredns-7d764666f9-r95cr\" (UID: \"04be029a-867d-492e-9950-26ff6399fa3b\") " pod="kube-system/coredns-7d764666f9-r95cr"
	Dec 13 09:09:45 no-preload-291522 kubelet[2233]: I1213 09:09:45.441882    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/51f37d10-278f-4664-bda2-35093a1fccb5-tmp\") pod \"storage-provisioner\" (UID: \"51f37d10-278f-4664-bda2-35093a1fccb5\") " pod="kube-system/storage-provisioner"
	Dec 13 09:09:45 no-preload-291522 kubelet[2233]: I1213 09:09:45.441979    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwttx\" (UniqueName: \"kubernetes.io/projected/51f37d10-278f-4664-bda2-35093a1fccb5-kube-api-access-dwttx\") pod \"storage-provisioner\" (UID: \"51f37d10-278f-4664-bda2-35093a1fccb5\") " pod="kube-system/storage-provisioner"
	Dec 13 09:09:45 no-preload-291522 kubelet[2233]: I1213 09:09:45.442025    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04be029a-867d-492e-9950-26ff6399fa3b-config-volume\") pod \"coredns-7d764666f9-r95cr\" (UID: \"04be029a-867d-492e-9950-26ff6399fa3b\") " pod="kube-system/coredns-7d764666f9-r95cr"
	Dec 13 09:09:46 no-preload-291522 kubelet[2233]: E1213 09:09:46.006581    2233 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r95cr" containerName="coredns"
	Dec 13 09:09:46 no-preload-291522 kubelet[2233]: I1213 09:09:46.020247    2233 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-r95cr" podStartSLOduration=14.020225946 podStartE2EDuration="14.020225946s" podCreationTimestamp="2025-12-13 09:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:46.020028246 +0000 UTC m=+19.221890809" watchObservedRunningTime="2025-12-13 09:09:46.020225946 +0000 UTC m=+19.222088512"
	Dec 13 09:09:46 no-preload-291522 kubelet[2233]: I1213 09:09:46.029047    2233 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.029032338 podStartE2EDuration="13.029032338s" podCreationTimestamp="2025-12-13 09:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:46.02898237 +0000 UTC m=+19.230844936" watchObservedRunningTime="2025-12-13 09:09:46.029032338 +0000 UTC m=+19.230894904"
	Dec 13 09:09:47 no-preload-291522 kubelet[2233]: E1213 09:09:47.010710    2233 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r95cr" containerName="coredns"
	Dec 13 09:09:48 no-preload-291522 kubelet[2233]: E1213 09:09:48.013907    2233 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r95cr" containerName="coredns"
	Dec 13 09:09:48 no-preload-291522 kubelet[2233]: I1213 09:09:48.159544    2233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnhq\" (UniqueName: \"kubernetes.io/projected/85e67eca-1cd0-4ca0-ad34-aed52941adf1-kube-api-access-llnhq\") pod \"busybox\" (UID: \"85e67eca-1cd0-4ca0-ad34-aed52941adf1\") " pod="default/busybox"
	Dec 13 09:09:50 no-preload-291522 kubelet[2233]: I1213 09:09:50.030894    2233 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.746162356 podStartE2EDuration="2.030872341s" podCreationTimestamp="2025-12-13 09:09:48 +0000 UTC" firstStartedPulling="2025-12-13 09:09:48.459955539 +0000 UTC m=+21.661818099" lastFinishedPulling="2025-12-13 09:09:49.74466545 +0000 UTC m=+22.946528084" observedRunningTime="2025-12-13 09:09:50.030476171 +0000 UTC m=+23.232338738" watchObservedRunningTime="2025-12-13 09:09:50.030872341 +0000 UTC m=+23.232734907"
	
	
	==> storage-provisioner [744fadd8624d60585b6182ac9bb342d2bb207e73722f4d053a3b43f7a1c736fb] <==
	I1213 09:09:45.749693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:09:45.762300       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:09:45.762364       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:09:45.764611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:45.769376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:09:45.769639       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:09:45.769800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291522_0874e9d9-a6a1-447d-82de-c40b699eb809!
	I1213 09:09:45.769773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f21198f-5ecf-4114-b32b-88a1a9ef30f7", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291522_0874e9d9-a6a1-447d-82de-c40b699eb809 became leader
	W1213 09:09:45.772657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:45.777325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:09:45.870868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291522_0874e9d9-a6a1-447d-82de-c40b699eb809!
	W1213 09:09:47.780417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:47.785514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:49.790057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:49.796340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:51.799467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:51.804790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:53.808710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:53.812816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:55.816259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:55.821379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:57.825127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:09:57.863735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
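
Note: the repeated "v1 Endpoints is deprecated" warnings above come from the storage provisioner's leader election, which still renews an Endpoints-based lock (the k8s.io-minikube-hostpath object referenced in the Event line). A quick way to look at that lock, and at the EndpointSlice API the warning points to, assuming the cluster is still reachable under this context:

    kubectl --context no-preload-291522 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    kubectl --context no-preload-291522 -n kube-system get endpointslices
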
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-291522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.72s)
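
Note: the post-mortem helpers query single status fields through a Go template (here {{.APIServer}}); several fields can be combined in one call, e.g. this sketch against the same profile:

    out/minikube-linux-amd64 status --format='{{.Host}} {{.APIServer}}' -p no-preload-291522
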

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (313.5396ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:09:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
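
Note: exit status 11 here is minikube's MK_ADDON_ENABLE_PAUSED path: before enabling an addon it checks whether the cluster is paused by listing containers through the runtime, and that check shells out to "sudo runc list -f json", which fails because /run/runc does not exist on the node at that moment (per the stderr above). A rough manual reproduction of the failing probe, assuming the profile is still running:

    out/minikube-linux-amd64 -p old-k8s-version-234538 ssh -- sudo runc list -f json
    out/minikube-linux-amd64 -p old-k8s-version-234538 ssh -- sudo crictl ps -a
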
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-234538 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-234538 describe deploy/metrics-server -n kube-system: exit status 1 (75.686497ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-234538 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
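
Note: the assertion above could not be checked because the metrics-server deployment never appeared (the addon enable itself failed). Once the deployment exists, the --images/--registries override can be verified directly; a minimal sketch, assuming the addon deploys under its usual name in kube-system:

    kubectl --context old-k8s-version-234538 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to contain fake.domain/registry.k8s.io/echoserver:1.4 per the assertion above
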
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-234538
helpers_test.go:244: (dbg) docker inspect old-k8s-version-234538:

-- stdout --
	[
	    {
	        "Id": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	        "Created": "2025-12-13T09:09:04.827842959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:09:04.866640087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hosts",
	        "LogPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e-json.log",
	        "Name": "/old-k8s-version-234538",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-234538:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-234538",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	                "LowerDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-234538",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-234538/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-234538",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e2b791ae44006592bde0e16ca7c6a25443624ab8fa6ab9f833a5dd9ffe6c9fda",
	            "SandboxKey": "/var/run/docker/netns/e2b791ae4400",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-234538": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03cd5b8c21bed175419d3254147f83d57d2a9fa170523cc5fcd50bb748af5603",
	                    "EndpointID": "5e0f83db3fb5a59c4890e75d6cb877aacc59319b25bd6c8edaa91d865f31b8cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "06:00:9a:32:e4:e7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-234538",
	                        "9956457b660b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
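
Note: individual fields from the docker inspect document above can be pulled with a --format Go template instead of parsing the full JSON; for example, against the same container:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-234538   # host port mapped to 22/tcp (33098 above)
    docker inspect -f '{{.State.Status}} {{.State.Pid}}' old-k8s-version-234538                                 # container state and init PID
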
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25: (1.022929263s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-833990 sudo cat /etc/kubernetes/kubelet.conf                                                                                                 │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /var/lib/kubelet/config.yaml                                                                                                 │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status docker --all --full --no-pager                                                                                  │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat docker --no-pager                                                                                                  │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/docker/daemon.json                                                                                                      │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo docker system info                                                                                                               │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl status cri-docker --all --full --no-pager                                                                              │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat cri-docker --no-pager                                                                                              │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                         │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                   │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cri-dockerd --version                                                                                                            │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status containerd --all --full --no-pager                                                                              │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat containerd --no-pager                                                                                              │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /lib/systemd/system/containerd.service                                                                                       │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/containerd/config.toml                                                                                                  │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo containerd config dump                                                                                                           │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status crio --all --full --no-pager                                                                                    │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl cat crio --no-pager                                                                                                    │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                          │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                      │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                       │ bridge-833990          │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2 │ embed-certs-379362     │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-291522      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-234538 │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                            │ no-preload-291522      │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:09:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:09:53.276589  318834 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:09:53.276690  318834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:09:53.276694  318834 out.go:374] Setting ErrFile to fd 2...
	I1213 09:09:53.276698  318834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:09:53.276944  318834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:09:53.277398  318834 out.go:368] Setting JSON to false
	I1213 09:09:53.278829  318834 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3145,"bootTime":1765613848,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:09:53.278999  318834 start.go:143] virtualization: kvm guest
	I1213 09:09:53.281039  318834 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:09:53.282775  318834 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:09:53.282773  318834 notify.go:221] Checking for updates...
	I1213 09:09:53.285223  318834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:09:53.286580  318834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:09:53.290984  318834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:09:53.292280  318834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:09:53.293543  318834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:09:53.295129  318834 config.go:182] Loaded profile config "kubernetes-upgrade-814560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:09:53.295254  318834 config.go:182] Loaded profile config "no-preload-291522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:09:53.295338  318834 config.go:182] Loaded profile config "old-k8s-version-234538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 09:09:53.295438  318834 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:09:53.321039  318834 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:09:53.321119  318834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:09:53.379204  318834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:09:53.369406629 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:09:53.379315  318834 docker.go:319] overlay module found
	I1213 09:09:53.381202  318834 out.go:179] * Using the docker driver based on user configuration
	I1213 09:09:53.382325  318834 start.go:309] selected driver: docker
	I1213 09:09:53.382351  318834 start.go:927] validating driver "docker" against <nil>
	I1213 09:09:53.382363  318834 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:09:53.382890  318834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:09:53.441299  318834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:09:53.43138383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:09:53.441505  318834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:09:53.441714  318834 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:09:53.443390  318834 out.go:179] * Using Docker driver with root privileges
	I1213 09:09:53.444531  318834 cni.go:84] Creating CNI manager for ""
	I1213 09:09:53.444592  318834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:09:53.444606  318834 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:09:53.444686  318834 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:09:53.446061  318834 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:09:53.447220  318834 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:09:53.448400  318834 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:09:53.449579  318834 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:09:53.449612  318834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:09:53.449623  318834 cache.go:65] Caching tarball of preloaded images
	I1213 09:09:53.449704  318834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:09:53.449709  318834 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:09:53.449828  318834 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:09:53.449943  318834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:09:53.449967  318834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json: {Name:mk7dfba1bcfca4eca45c7dccdef6b77ffab8fac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:09:53.470514  318834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:09:53.470538  318834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:09:53.470558  318834 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:09:53.470605  318834 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:09:53.470718  318834 start.go:364] duration metric: took 91.904µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:09:53.470757  318834 start.go:93] Provisioning new machine with config: &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:09:53.470837  318834 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:09:53.473369  318834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:09:53.473627  318834 start.go:159] libmachine.API.Create for "embed-certs-379362" (driver="docker")
	I1213 09:09:53.473658  318834 client.go:173] LocalClient.Create starting
	I1213 09:09:53.473715  318834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem
	I1213 09:09:53.473748  318834 main.go:143] libmachine: Decoding PEM data...
	I1213 09:09:53.473769  318834 main.go:143] libmachine: Parsing certificate...
	I1213 09:09:53.473842  318834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem
	I1213 09:09:53.473865  318834 main.go:143] libmachine: Decoding PEM data...
	I1213 09:09:53.473878  318834 main.go:143] libmachine: Parsing certificate...
	I1213 09:09:53.474235  318834 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:09:53.491688  318834 cli_runner.go:211] docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:09:53.491765  318834 network_create.go:284] running [docker network inspect embed-certs-379362] to gather additional debugging logs...
	I1213 09:09:53.491788  318834 cli_runner.go:164] Run: docker network inspect embed-certs-379362
	W1213 09:09:53.509980  318834 cli_runner.go:211] docker network inspect embed-certs-379362 returned with exit code 1
	I1213 09:09:53.510007  318834 network_create.go:287] error running [docker network inspect embed-certs-379362]: docker network inspect embed-certs-379362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-379362 not found
	I1213 09:09:53.510020  318834 network_create.go:289] output of [docker network inspect embed-certs-379362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-379362 not found
	
	** /stderr **
	I1213 09:09:53.510139  318834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:09:53.527644  318834 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b9f57735373a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:3a:37:6d:21:84} reservation:<nil>}
	I1213 09:09:53.528309  318834 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ee6a6cb099f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:13:d6:80:5b:9d} reservation:<nil>}
	I1213 09:09:53.529045  318834 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9c992914162b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:b1:9a:07:84:35} reservation:<nil>}
	I1213 09:09:53.529682  318834 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-03cd5b8c21be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:97:ed:da:2b:60} reservation:<nil>}
	I1213 09:09:53.530470  318834 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1b710}
	I1213 09:09:53.530517  318834 network_create.go:124] attempt to create docker network embed-certs-379362 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1213 09:09:53.530561  318834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-379362 embed-certs-379362
	I1213 09:09:53.580605  318834 network_create.go:108] docker network embed-certs-379362 192.168.85.0/24 created
	I1213 09:09:53.580635  318834 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-379362" container
	I1213 09:09:53.580700  318834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:09:53.599073  318834 cli_runner.go:164] Run: docker volume create embed-certs-379362 --label name.minikube.sigs.k8s.io=embed-certs-379362 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:09:53.618112  318834 oci.go:103] Successfully created a docker volume embed-certs-379362
	I1213 09:09:53.618197  318834 cli_runner.go:164] Run: docker run --rm --name embed-certs-379362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-379362 --entrypoint /usr/bin/test -v embed-certs-379362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:09:54.003864  318834 oci.go:107] Successfully prepared a docker volume embed-certs-379362
	I1213 09:09:54.003950  318834 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:09:54.003967  318834 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:09:54.004031  318834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-379362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 09:09:57.892954  318834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-379362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.888849744s)
	I1213 09:09:57.892992  318834 kic.go:203] duration metric: took 3.889024526s to extract preloaded images to volume ...
	W1213 09:09:57.893081  318834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 09:09:57.893115  318834 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 09:09:57.893166  318834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:09:57.953803  318834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-379362 --name embed-certs-379362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-379362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-379362 --network embed-certs-379362 --ip 192.168.85.2 --volume embed-certs-379362:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:09:58.257879  318834 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Running}}
	
	
	==> CRI-O <==
	Dec 13 09:09:48 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:48.063741436Z" level=info msg="Starting container: 171953d1e4b1e0ad1ea444431f19c4eeacd0f3e6ea940a6597e30e53077a27b9" id=87f71d24-d226-425a-be63-7dd8942f5e05 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:09:48 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:48.06591773Z" level=info msg="Started container" PID=2127 containerID=171953d1e4b1e0ad1ea444431f19c4eeacd0f3e6ea940a6597e30e53077a27b9 description=kube-system/coredns-5dd5756b68-g66tb/coredns id=87f71d24-d226-425a-be63-7dd8942f5e05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8713d77baaed34123fd4e7502ce1dbe5a873b0115565ea1a21fd7bca1650436
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.545524532Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9fa55cf5-a223-4f0e-836d-024e8d8fa7ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.545605947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.550674693Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:254f58981ea4cab8d82823228d1dccd2cf0ef15e7cf97bc9221e135a9427109a UID:70491080-fd46-4699-bfa3-ed5f7e53ce0f NetNS:/var/run/netns/912cf255-c935-4118-98a0-0aa6084a80f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e882c8}] Aliases:map[]}"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.550699053Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.560028883Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:254f58981ea4cab8d82823228d1dccd2cf0ef15e7cf97bc9221e135a9427109a UID:70491080-fd46-4699-bfa3-ed5f7e53ce0f NetNS:/var/run/netns/912cf255-c935-4118-98a0-0aa6084a80f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e882c8}] Aliases:map[]}"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.56015067Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.56104174Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.56233155Z" level=info msg="Ran pod sandbox 254f58981ea4cab8d82823228d1dccd2cf0ef15e7cf97bc9221e135a9427109a with infra container: default/busybox/POD" id=9fa55cf5-a223-4f0e-836d-024e8d8fa7ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.563587243Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4571d68-ded3-4f40-9108-33997d316377 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.563707536Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b4571d68-ded3-4f40-9108-33997d316377 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.563740371Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b4571d68-ded3-4f40-9108-33997d316377 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.564244279Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c31ee15d-091c-4911-8c4b-120850eb0464 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:09:50 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:50.565682542Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.830541246Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c31ee15d-091c-4911-8c4b-120850eb0464 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.83141106Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=095b3465-9573-4d34-a86f-0ccf10df9f12 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.833074912Z" level=info msg="Creating container: default/busybox/busybox" id=b3273fe2-ce8c-432e-ae6d-bddcfdf12844 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.833200245Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.836663905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.837037271Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.865174797Z" level=info msg="Created container e23ce9faf7bcd14ea7eaf76414d5691c2a6029bd9371accdfe4831a9084539c1: default/busybox/busybox" id=b3273fe2-ce8c-432e-ae6d-bddcfdf12844 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.865811688Z" level=info msg="Starting container: e23ce9faf7bcd14ea7eaf76414d5691c2a6029bd9371accdfe4831a9084539c1" id=d58520f4-bf6a-4322-851a-5fe3b723cf4d name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:09:51 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:51.867420139Z" level=info msg="Started container" PID=2203 containerID=e23ce9faf7bcd14ea7eaf76414d5691c2a6029bd9371accdfe4831a9084539c1 description=default/busybox/busybox id=d58520f4-bf6a-4322-851a-5fe3b723cf4d name=/runtime.v1.RuntimeService/StartContainer sandboxID=254f58981ea4cab8d82823228d1dccd2cf0ef15e7cf97bc9221e135a9427109a
	Dec 13 09:09:58 old-k8s-version-234538 crio[771]: time="2025-12-13T09:09:58.345978967Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e23ce9faf7bcd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   254f58981ea4c       busybox                                          default
	171953d1e4b1e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   d8713d77baaed       coredns-5dd5756b68-g66tb                         kube-system
	cadf23fe0c5ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   f5f7770b2e9d8       storage-provisioner                              kube-system
	4d46264687cb6       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   89602b929f7b4       kindnet-9hllk                                    kube-system
	2dfc7116f0b13       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   47f476900806c       kube-proxy-6bkvj                                 kube-system
	73794fd75e179       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   866f86ade0741       etcd-old-k8s-version-234538                      kube-system
	3ca0143e8346c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   c6966218424cc       kube-scheduler-old-k8s-version-234538            kube-system
	ed2b2fc3b97e9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   4b1ee3ada233f       kube-controller-manager-old-k8s-version-234538   kube-system
	fe8c9b40d06ae       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   173ce6f6f4120       kube-apiserver-old-k8s-version-234538            kube-system
	
	
	==> coredns [171953d1e4b1e0ad1ea444431f19c4eeacd0f3e6ea940a6597e30e53077a27b9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52495 - 64315 "HINFO IN 8344990637764439790.9051472452355472393. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081596876s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-234538
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-234538
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=old-k8s-version-234538
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-234538
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:09:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:09:53 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:09:53 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:09:53 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:09:53 +0000   Sat, 13 Dec 2025 09:09:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-234538
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b58ade41-ef0c-4ef7-817f-5090fbbdf23c
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-g66tb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-234538                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-9hllk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-234538             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-old-k8s-version-234538    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-6bkvj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-234538             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller
	  Normal  NodeReady                12s                kubelet          Node old-k8s-version-234538 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [73794fd75e179e71aa86dd8379a96951732b7804c42520544f51a8d273c7854d] <==
	{"level":"info","ts":"2025-12-13T09:09:16.971228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-12-13T09:09:16.971396Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-13T09:09:16.972355Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-13T09:09:16.97247Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:09:16.972787Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:09:16.972675Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-13T09:09:16.972736Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-13T09:09:17.962344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-13T09:09:17.962386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-13T09:09:17.962411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-13T09:09:17.962425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-13T09:09:17.96243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T09:09:17.962438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-13T09:09:17.962445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T09:09:17.963325Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T09:09:17.963848Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:09:17.96387Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:09:17.963845Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-234538 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T09:09:17.964051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T09:09:17.964085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T09:09:17.964132Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T09:09:17.964305Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T09:09:17.96434Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-13T09:09:17.965018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-13T09:09:17.965119Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:09:59 up 52 min,  0 user,  load average: 3.49, 3.21, 2.16
	Linux old-k8s-version-234538 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d46264687cb6b244db0e570b70251d201258f8c975d5cf87e057583c20b762b] <==
	I1213 09:09:36.966976       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:09:36.967250       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 09:09:36.967400       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:09:36.967416       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:09:36.967427       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:09:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:09:37.163282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:09:37.163316       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:09:37.163330       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:09:37.163530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:09:37.463537       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:09:37.463558       1 metrics.go:72] Registering metrics
	I1213 09:09:37.463608       1 controller.go:711] "Syncing nftables rules"
	I1213 09:09:47.164335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:09:47.164391       1 main.go:301] handling current node
	I1213 09:09:57.166304       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:09:57.166357       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fe8c9b40d06aee14effb44aa77619e85726cf21b49229f4105144bf0a9a14f13] <==
	I1213 09:09:19.207159       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 09:09:19.207167       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:09:19.207177       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:09:19.207387       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 09:09:19.207468       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 09:09:19.209405       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 09:09:19.212628       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1213 09:09:19.216227       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1213 09:09:19.419008       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:09:20.112615       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 09:09:20.116422       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:09:20.116456       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:09:20.540271       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:09:20.577119       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:09:20.717300       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:09:20.722647       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1213 09:09:20.723643       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 09:09:20.728573       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:09:21.140817       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 09:09:22.144583       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 09:09:22.154435       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:09:22.162745       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1213 09:09:34.553963       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1213 09:09:34.903961       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1213 09:09:58.377959       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.76.2:36514->192.168.76.2:10250: write: broken pipe
	
	
	==> kube-controller-manager [ed2b2fc3b97e91120326a1ec4b97888c53cb7753a8bd6adaba91282deadd8f2f] <==
	I1213 09:09:34.040403       1 shared_informer.go:318] Caches are synced for PV protection
	I1213 09:09:34.104828       1 shared_informer.go:318] Caches are synced for deployment
	I1213 09:09:34.189639       1 shared_informer.go:318] Caches are synced for disruption
	I1213 09:09:34.204792       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:09:34.206363       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:09:34.529647       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:09:34.553016       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:09:34.553052       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 09:09:34.567388       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6bkvj"
	I1213 09:09:34.570340       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9hllk"
	I1213 09:09:34.906069       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1213 09:09:34.921997       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1213 09:09:35.006460       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-g66tb"
	I1213 09:09:35.011924       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kbqf6"
	I1213 09:09:35.024526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.481329ms"
	I1213 09:09:35.030543       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-kbqf6"
	I1213 09:09:35.036209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.626074ms"
	I1213 09:09:35.044195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.934161ms"
	I1213 09:09:35.044322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.298µs"
	I1213 09:09:47.705682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.287µs"
	I1213 09:09:47.722883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.934µs"
	I1213 09:09:48.303743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.162µs"
	I1213 09:09:48.331140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.15963ms"
	I1213 09:09:48.331261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.343µs"
	I1213 09:09:48.976160       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [2dfc7116f0b136903d0f803c5f8f4c0e8933ca5dec934365252da2dc6ad69758] <==
	I1213 09:09:35.573597       1 server_others.go:69] "Using iptables proxy"
	I1213 09:09:35.582800       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 09:09:35.601095       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:09:35.603525       1 server_others.go:152] "Using iptables Proxier"
	I1213 09:09:35.603550       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 09:09:35.603557       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 09:09:35.603578       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 09:09:35.603848       1 server.go:846] "Version info" version="v1.28.0"
	I1213 09:09:35.603863       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:09:35.604371       1 config.go:97] "Starting endpoint slice config controller"
	I1213 09:09:35.604404       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 09:09:35.604444       1 config.go:188] "Starting service config controller"
	I1213 09:09:35.604449       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 09:09:35.604567       1 config.go:315] "Starting node config controller"
	I1213 09:09:35.604602       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 09:09:35.704550       1 shared_informer.go:318] Caches are synced for service config
	I1213 09:09:35.704583       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 09:09:35.704672       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3ca0143e8346c8217672acdf44c919df7108250bb08f74b32dcbb07212561c5a] <==
	E1213 09:09:19.171579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 09:09:19.171586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1213 09:09:19.171627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 09:09:19.171665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 09:09:19.171711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 09:09:19.171762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 09:09:19.171717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 09:09:19.171817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 09:09:19.992266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 09:09:19.992309       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1213 09:09:20.009172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1213 09:09:20.009223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1213 09:09:20.208573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 09:09:20.208608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1213 09:09:20.242198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1213 09:09:20.242235       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1213 09:09:20.297147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 09:09:20.297183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1213 09:09:20.301675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 09:09:20.301701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1213 09:09:20.351395       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 09:09:20.351425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1213 09:09:20.378146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1213 09:09:20.378183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1213 09:09:20.766542       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: I1213 09:09:34.585978    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e-xtables-lock\") pod \"kube-proxy-6bkvj\" (UID: \"4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e\") " pod="kube-system/kube-proxy-6bkvj"
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: I1213 09:09:34.586007    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7646b3b-7180-4ac3-b998-dde77c66beb1-xtables-lock\") pod \"kindnet-9hllk\" (UID: \"e7646b3b-7180-4ac3-b998-dde77c66beb1\") " pod="kube-system/kindnet-9hllk"
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: I1213 09:09:34.586034    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wl5\" (UniqueName: \"kubernetes.io/projected/4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e-kube-api-access-82wl5\") pod \"kube-proxy-6bkvj\" (UID: \"4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e\") " pod="kube-system/kube-proxy-6bkvj"
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: I1213 09:09:34.586061    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e7646b3b-7180-4ac3-b998-dde77c66beb1-cni-cfg\") pod \"kindnet-9hllk\" (UID: \"e7646b3b-7180-4ac3-b998-dde77c66beb1\") " pod="kube-system/kindnet-9hllk"
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: I1213 09:09:34.586091    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e-lib-modules\") pod \"kube-proxy-6bkvj\" (UID: \"4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e\") " pod="kube-system/kube-proxy-6bkvj"
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703545    1386 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703737    1386 projected.go:198] Error preparing data for projected volume kube-api-access-qq6tw for pod kube-system/kindnet-9hllk: configmap "kube-root-ca.crt" not found
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703839    1386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7646b3b-7180-4ac3-b998-dde77c66beb1-kube-api-access-qq6tw podName:e7646b3b-7180-4ac3-b998-dde77c66beb1 nodeName:}" failed. No retries permitted until 2025-12-13 09:09:35.203814502 +0000 UTC m=+13.084441177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qq6tw" (UniqueName: "kubernetes.io/projected/e7646b3b-7180-4ac3-b998-dde77c66beb1-kube-api-access-qq6tw") pod "kindnet-9hllk" (UID: "e7646b3b-7180-4ac3-b998-dde77c66beb1") : configmap "kube-root-ca.crt" not found
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703548    1386 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703889    1386 projected.go:198] Error preparing data for projected volume kube-api-access-82wl5 for pod kube-system/kube-proxy-6bkvj: configmap "kube-root-ca.crt" not found
	Dec 13 09:09:34 old-k8s-version-234538 kubelet[1386]: E1213 09:09:34.703983    1386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e-kube-api-access-82wl5 podName:4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e nodeName:}" failed. No retries permitted until 2025-12-13 09:09:35.203954849 +0000 UTC m=+13.084581528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-82wl5" (UniqueName: "kubernetes.io/projected/4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e-kube-api-access-82wl5") pod "kube-proxy-6bkvj" (UID: "4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e") : configmap "kube-root-ca.crt" not found
	Dec 13 09:09:37 old-k8s-version-234538 kubelet[1386]: I1213 09:09:37.276474    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9hllk" podStartSLOduration=1.9831707939999998 podCreationTimestamp="2025-12-13 09:09:34 +0000 UTC" firstStartedPulling="2025-12-13 09:09:35.487231798 +0000 UTC m=+13.367858468" lastFinishedPulling="2025-12-13 09:09:36.780477981 +0000 UTC m=+14.661104662" observedRunningTime="2025-12-13 09:09:37.27631573 +0000 UTC m=+15.156942409" watchObservedRunningTime="2025-12-13 09:09:37.276416988 +0000 UTC m=+15.157043649"
	Dec 13 09:09:37 old-k8s-version-234538 kubelet[1386]: I1213 09:09:37.276811    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6bkvj" podStartSLOduration=3.27677196 podCreationTimestamp="2025-12-13 09:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:36.279697854 +0000 UTC m=+14.160324532" watchObservedRunningTime="2025-12-13 09:09:37.27677196 +0000 UTC m=+15.157398642"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.680976    1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.704295    1386 topology_manager.go:215] "Topology Admit Handler" podUID="7daa29c1-ccc4-41e3-883f-1d7875be09c8" podNamespace="kube-system" podName="storage-provisioner"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.705482    1386 topology_manager.go:215] "Topology Admit Handler" podUID="a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7" podNamespace="kube-system" podName="coredns-5dd5756b68-g66tb"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.785083    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfzx\" (UniqueName: \"kubernetes.io/projected/a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7-kube-api-access-jdfzx\") pod \"coredns-5dd5756b68-g66tb\" (UID: \"a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7\") " pod="kube-system/coredns-5dd5756b68-g66tb"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.785167    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799d7\" (UniqueName: \"kubernetes.io/projected/7daa29c1-ccc4-41e3-883f-1d7875be09c8-kube-api-access-799d7\") pod \"storage-provisioner\" (UID: \"7daa29c1-ccc4-41e3-883f-1d7875be09c8\") " pod="kube-system/storage-provisioner"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.785288    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7-config-volume\") pod \"coredns-5dd5756b68-g66tb\" (UID: \"a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7\") " pod="kube-system/coredns-5dd5756b68-g66tb"
	Dec 13 09:09:47 old-k8s-version-234538 kubelet[1386]: I1213 09:09:47.785345    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7daa29c1-ccc4-41e3-883f-1d7875be09c8-tmp\") pod \"storage-provisioner\" (UID: \"7daa29c1-ccc4-41e3-883f-1d7875be09c8\") " pod="kube-system/storage-provisioner"
	Dec 13 09:09:48 old-k8s-version-234538 kubelet[1386]: I1213 09:09:48.303226    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g66tb" podStartSLOduration=13.303170142 podCreationTimestamp="2025-12-13 09:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:48.302879892 +0000 UTC m=+26.183506570" watchObservedRunningTime="2025-12-13 09:09:48.303170142 +0000 UTC m=+26.183796821"
	Dec 13 09:09:48 old-k8s-version-234538 kubelet[1386]: I1213 09:09:48.313635    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.313584397 podCreationTimestamp="2025-12-13 09:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:09:48.313287046 +0000 UTC m=+26.193913731" watchObservedRunningTime="2025-12-13 09:09:48.313584397 +0000 UTC m=+26.194211076"
	Dec 13 09:09:50 old-k8s-version-234538 kubelet[1386]: I1213 09:09:50.244125    1386 topology_manager.go:215] "Topology Admit Handler" podUID="70491080-fd46-4699-bfa3-ed5f7e53ce0f" podNamespace="default" podName="busybox"
	Dec 13 09:09:50 old-k8s-version-234538 kubelet[1386]: I1213 09:09:50.300694    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ghrr\" (UniqueName: \"kubernetes.io/projected/70491080-fd46-4699-bfa3-ed5f7e53ce0f-kube-api-access-9ghrr\") pod \"busybox\" (UID: \"70491080-fd46-4699-bfa3-ed5f7e53ce0f\") " pod="default/busybox"
	Dec 13 09:09:52 old-k8s-version-234538 kubelet[1386]: I1213 09:09:52.311598    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.044610491 podCreationTimestamp="2025-12-13 09:09:50 +0000 UTC" firstStartedPulling="2025-12-13 09:09:50.56391932 +0000 UTC m=+28.444545990" lastFinishedPulling="2025-12-13 09:09:51.830861458 +0000 UTC m=+29.711488119" observedRunningTime="2025-12-13 09:09:52.311332176 +0000 UTC m=+30.191958855" watchObservedRunningTime="2025-12-13 09:09:52.31155262 +0000 UTC m=+30.192179298"
	
	
	==> storage-provisioner [cadf23fe0c5acdb7725b3e547408065b39ec2ee23824864b292d14cd67742eb9] <==
	I1213 09:09:48.073855       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:09:48.084799       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:09:48.085571       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 09:09:48.096624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:09:48.096911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_34fdeb52-6719-4b94-9390-309746162398!
	I1213 09:09:48.097126       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60b54cd0-ddd8-481a-8123-7f67477a3495", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-234538_34fdeb52-6719-4b94-9390-309746162398 became leader
	I1213 09:09:48.197909       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_34fdeb52-6719-4b94-9390-309746162398!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-234538 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (293.326883ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:10:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-379362 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-379362 describe deploy/metrics-server -n kube-system: exit status 1 (75.270784ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-379362 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-379362
helpers_test.go:244: (dbg) docker inspect embed-certs-379362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	        "Created": "2025-12-13T09:09:57.972253088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:09:58.011267105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hostname",
	        "HostsPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hosts",
	        "LogPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718-json.log",
	        "Name": "/embed-certs-379362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-379362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-379362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	                "LowerDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/merged",
	                "UpperDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/diff",
	                "WorkDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-379362",
	                "Source": "/var/lib/docker/volumes/embed-certs-379362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-379362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-379362",
	                "name.minikube.sigs.k8s.io": "embed-certs-379362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "76c5681269d52505b4503f177b028d93f39555e3df816f90a6bea97cdaccd45e",
	            "SandboxKey": "/var/run/docker/netns/76c5681269d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-379362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3a87dafd473cb389c587dfde4fe3ed60a013e0268e1a1ec6ca1f8d2969aaec6",
	                    "EndpointID": "cd334f4a6f2e3e2471d757c802d50b01dfa62f2720652afba91dc455e982281e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "be:e0:1e:56:70:86",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-379362",
	                        "546452572cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
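The NetworkSettings.Ports map in the inspect output above is where the published host ports for this node come from: for example, 8443/tcp (the API server) is bound to 127.0.0.1:33106 and 22/tcp (SSH) to 127.0.0.1:33103. A minimal way to read one of these mappings back out with the standard docker CLI, assuming the container name shown above, is sketched below for reference; it is not part of the captured test run.

	# Print the host address bound to the container's 8443/tcp port
	# (per the inspect dump above this should be 127.0.0.1:33106).
	docker port embed-certs-379362 8443/tcp

	# Or dump the whole published port map as JSON.
	docker inspect --format '{{json .NetworkSettings.Ports}}' embed-certs-379362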
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25
E1213 09:10:42.540534    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25: (1.251103299s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-833990 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ ssh     │ -p bridge-833990 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo containerd config dump                                                                                                                                                                                                  │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:10:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:10:28.347132  328914 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:10:28.347394  328914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:10:28.347403  328914 out.go:374] Setting ErrFile to fd 2...
	I1213 09:10:28.347406  328914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:10:28.347632  328914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:10:28.348271  328914 out.go:368] Setting JSON to false
	I1213 09:10:28.349686  328914 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3180,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:10:28.349741  328914 start.go:143] virtualization: kvm guest
	I1213 09:10:28.351635  328914 out.go:179] * [default-k8s-diff-port-361270] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:10:28.353137  328914 notify.go:221] Checking for updates...
	I1213 09:10:28.353151  328914 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:10:28.354608  328914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:10:28.357250  328914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:10:28.358537  328914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:10:28.359735  328914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:10:28.360924  328914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:10:28.362876  328914 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:10:28.363017  328914 config.go:182] Loaded profile config "no-preload-291522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:10:28.363140  328914 config.go:182] Loaded profile config "old-k8s-version-234538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 09:10:28.363302  328914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:10:28.392222  328914 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:10:28.392335  328914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:10:28.456157  328914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 09:10:28.444435702 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:10:28.456296  328914 docker.go:319] overlay module found
	I1213 09:10:28.458945  328914 out.go:179] * Using the docker driver based on user configuration
	I1213 09:10:28.460142  328914 start.go:309] selected driver: docker
	I1213 09:10:28.460160  328914 start.go:927] validating driver "docker" against <nil>
	I1213 09:10:28.460175  328914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:10:28.460981  328914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:10:28.516676  328914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-13 09:10:28.507423489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:10:28.516863  328914 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 09:10:28.517083  328914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:10:28.518717  328914 out.go:179] * Using Docker driver with root privileges
	I1213 09:10:28.519696  328914 cni.go:84] Creating CNI manager for ""
	I1213 09:10:28.519771  328914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:10:28.519784  328914 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 09:10:28.519854  328914 start.go:353] cluster config:
	{Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:10:28.521186  328914 out.go:179] * Starting "default-k8s-diff-port-361270" primary control-plane node in "default-k8s-diff-port-361270" cluster
	I1213 09:10:28.522347  328914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:10:28.523412  328914 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:10:28.524552  328914 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:10:28.524589  328914 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:10:28.524601  328914 cache.go:65] Caching tarball of preloaded images
	I1213 09:10:28.524657  328914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:10:28.524754  328914 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:10:28.524790  328914 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:10:28.524924  328914 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/config.json ...
	I1213 09:10:28.524955  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/config.json: {Name:mk25cb4509838c6aba2b210263c30c50c1eb870c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:28.545694  328914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:10:28.545714  328914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:10:28.545734  328914 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:10:28.545775  328914 start.go:360] acquireMachinesLock for default-k8s-diff-port-361270: {Name:mk449517ae35c4f56ad4dd7a617f6d17b6cb11de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:10:28.545888  328914 start.go:364] duration metric: took 93.492µs to acquireMachinesLock for "default-k8s-diff-port-361270"
	I1213 09:10:28.545918  328914 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:10:28.546016  328914 start.go:125] createHost starting for "" (driver="docker")
	I1213 09:10:28.817218  318834 system_pods.go:86] 8 kube-system pods found
	I1213 09:10:28.817254  318834 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running
	I1213 09:10:28.817262  318834 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running
	I1213 09:10:28.817268  318834 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running
	I1213 09:10:28.817275  318834 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running
	I1213 09:10:28.817281  318834 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running
	I1213 09:10:28.817284  318834 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running
	I1213 09:10:28.817295  318834 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running
	I1213 09:10:28.817304  318834 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running
	I1213 09:10:28.817312  318834 system_pods.go:126] duration metric: took 1.405702583s to wait for k8s-apps to be running ...
	I1213 09:10:28.817320  318834 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:10:28.817359  318834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:28.831260  318834 system_svc.go:56] duration metric: took 13.931025ms WaitForService to wait for kubelet
	I1213 09:10:28.831284  318834 kubeadm.go:587] duration metric: took 12.823130377s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:10:28.831301  318834 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:10:28.834390  318834 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:10:28.834413  318834 node_conditions.go:123] node cpu capacity is 8
	I1213 09:10:28.834428  318834 node_conditions.go:105] duration metric: took 3.122733ms to run NodePressure ...
	I1213 09:10:28.834443  318834 start.go:242] waiting for startup goroutines ...
	I1213 09:10:28.834463  318834 start.go:247] waiting for cluster config update ...
	I1213 09:10:28.834472  318834 start.go:256] writing updated cluster config ...
	I1213 09:10:28.834740  318834 ssh_runner.go:195] Run: rm -f paused
	I1213 09:10:28.839201  318834 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:10:28.843425  318834 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.848478  318834 pod_ready.go:94] pod "coredns-66bc5c9577-24vtj" is "Ready"
	I1213 09:10:28.848513  318834 pod_ready.go:86] duration metric: took 5.068377ms for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.853549  318834 pod_ready.go:83] waiting for pod "etcd-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.857622  318834 pod_ready.go:94] pod "etcd-embed-certs-379362" is "Ready"
	I1213 09:10:28.857646  318834 pod_ready.go:86] duration metric: took 4.074598ms for pod "etcd-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.859637  318834 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.863751  318834 pod_ready.go:94] pod "kube-apiserver-embed-certs-379362" is "Ready"
	I1213 09:10:28.863768  318834 pod_ready.go:86] duration metric: took 4.113541ms for pod "kube-apiserver-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:28.865533  318834 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:29.244811  318834 pod_ready.go:94] pod "kube-controller-manager-embed-certs-379362" is "Ready"
	I1213 09:10:29.244840  318834 pod_ready.go:86] duration metric: took 379.286822ms for pod "kube-controller-manager-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:29.445392  318834 pod_ready.go:83] waiting for pod "kube-proxy-zmtpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:29.844738  318834 pod_ready.go:94] pod "kube-proxy-zmtpb" is "Ready"
	I1213 09:10:29.844766  318834 pod_ready.go:86] duration metric: took 399.319366ms for pod "kube-proxy-zmtpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:30.045046  318834 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:30.444303  318834 pod_ready.go:94] pod "kube-scheduler-embed-certs-379362" is "Ready"
	I1213 09:10:30.444332  318834 pod_ready.go:86] duration metric: took 399.251405ms for pod "kube-scheduler-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:10:30.444347  318834 pod_ready.go:40] duration metric: took 1.605106423s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:10:30.507109  318834 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:10:30.510806  318834 out.go:179] * Done! kubectl is now configured to use "embed-certs-379362" cluster and "default" namespace by default
	I1213 09:10:25.910767  323665 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:10:25.915995  323665 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:10:25.917102  323665 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:10:25.917131  323665 api_server.go:131] duration metric: took 507.301644ms to wait for apiserver health ...
	I1213 09:10:25.917142  323665 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:10:25.920840  323665 system_pods.go:59] 8 kube-system pods found
	I1213 09:10:25.920886  323665 system_pods.go:61] "coredns-7d764666f9-r95cr" [04be029a-867d-492e-9950-26ff6399fa3b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:10:25.920898  323665 system_pods.go:61] "etcd-no-preload-291522" [d48b22f6-00c7-4070-b335-99d2873a9aa9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:10:25.920911  323665 system_pods.go:61] "kindnet-sm6z6" [fc83086c-0e3b-4fb8-970e-573f20d37433] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:10:25.920940  323665 system_pods.go:61] "kube-apiserver-no-preload-291522" [45948da8-07f1-4231-8216-0fcd7fda0da0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:10:25.920950  323665 system_pods.go:61] "kube-controller-manager-no-preload-291522" [2ea242b1-c335-4a70-b968-840081ddbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:10:25.920958  323665 system_pods.go:61] "kube-proxy-ktgbz" [b0bac974-dddb-4c41-8c17-b9f35a3e918a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:10:25.920970  323665 system_pods.go:61] "kube-scheduler-no-preload-291522" [b2a375a9-74c5-462e-94f4-225945355eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:10:25.920978  323665 system_pods.go:61] "storage-provisioner" [51f37d10-278f-4664-bda2-35093a1fccb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:10:25.920986  323665 system_pods.go:74] duration metric: took 3.837434ms to wait for pod list to return data ...
	I1213 09:10:25.920996  323665 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:10:25.929556  323665 default_sa.go:45] found service account: "default"
	I1213 09:10:25.929581  323665 default_sa.go:55] duration metric: took 8.577633ms for default service account to be created ...
	I1213 09:10:25.929592  323665 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:10:26.021233  323665 system_pods.go:86] 8 kube-system pods found
	I1213 09:10:26.021346  323665 system_pods.go:89] "coredns-7d764666f9-r95cr" [04be029a-867d-492e-9950-26ff6399fa3b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:10:26.021367  323665 system_pods.go:89] "etcd-no-preload-291522" [d48b22f6-00c7-4070-b335-99d2873a9aa9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:10:26.021382  323665 system_pods.go:89] "kindnet-sm6z6" [fc83086c-0e3b-4fb8-970e-573f20d37433] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:10:26.021392  323665 system_pods.go:89] "kube-apiserver-no-preload-291522" [45948da8-07f1-4231-8216-0fcd7fda0da0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:10:26.021410  323665 system_pods.go:89] "kube-controller-manager-no-preload-291522" [2ea242b1-c335-4a70-b968-840081ddbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:10:26.021419  323665 system_pods.go:89] "kube-proxy-ktgbz" [b0bac974-dddb-4c41-8c17-b9f35a3e918a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:10:26.021428  323665 system_pods.go:89] "kube-scheduler-no-preload-291522" [b2a375a9-74c5-462e-94f4-225945355eaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:10:26.021437  323665 system_pods.go:89] "storage-provisioner" [51f37d10-278f-4664-bda2-35093a1fccb5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:10:26.021447  323665 system_pods.go:126] duration metric: took 91.848323ms to wait for k8s-apps to be running ...
	I1213 09:10:26.021458  323665 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:10:26.021520  323665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:26.038954  323665 system_svc.go:56] duration metric: took 17.480337ms WaitForService to wait for kubelet
	I1213 09:10:26.038986  323665 kubeadm.go:587] duration metric: took 3.035737182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:10:26.039008  323665 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:10:26.042082  323665 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:10:26.042111  323665 node_conditions.go:123] node cpu capacity is 8
	I1213 09:10:26.042130  323665 node_conditions.go:105] duration metric: took 3.115882ms to run NodePressure ...
	I1213 09:10:26.042145  323665 start.go:242] waiting for startup goroutines ...
	I1213 09:10:26.042156  323665 start.go:247] waiting for cluster config update ...
	I1213 09:10:26.042171  323665 start.go:256] writing updated cluster config ...
	I1213 09:10:26.042455  323665 ssh_runner.go:195] Run: rm -f paused
	I1213 09:10:26.047597  323665 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:10:26.120344  323665 pod_ready.go:83] waiting for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:10:28.125643  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	W1213 09:10:30.126285  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:10:27.832418  324697 default_sa.go:45] found service account: "default"
	I1213 09:10:27.832441  324697 default_sa.go:55] duration metric: took 3.041449ms for default service account to be created ...
	I1213 09:10:27.832452  324697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:10:27.832825  324697 addons.go:530] duration metric: took 3.110558994s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1213 09:10:27.835626  324697 system_pods.go:86] 8 kube-system pods found
	I1213 09:10:27.835657  324697 system_pods.go:89] "coredns-5dd5756b68-g66tb" [a153fd3d-a4bb-4bda-9a1f-94a1f6d6b5f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:10:27.835669  324697 system_pods.go:89] "etcd-old-k8s-version-234538" [e90ae543-4ebd-44fd-92a0-84aca07dde9f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:10:27.835677  324697 system_pods.go:89] "kindnet-9hllk" [e7646b3b-7180-4ac3-b998-dde77c66beb1] Running
	I1213 09:10:27.835693  324697 system_pods.go:89] "kube-apiserver-old-k8s-version-234538" [6943378c-bd27-4424-a4aa-88868cb57eda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:10:27.835702  324697 system_pods.go:89] "kube-controller-manager-old-k8s-version-234538" [46dddd2c-57fa-42eb-b4c7-ad526dac7bca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:10:27.835708  324697 system_pods.go:89] "kube-proxy-6bkvj" [4da661d0-6c2a-4cd4-afbe-c9e7f4f70f3e] Running
	I1213 09:10:27.835716  324697 system_pods.go:89] "kube-scheduler-old-k8s-version-234538" [a0c27bbb-e6ac-4548-889f-7fb70be2f761] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:10:27.835724  324697 system_pods.go:89] "storage-provisioner" [7daa29c1-ccc4-41e3-883f-1d7875be09c8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:10:27.835745  324697 system_pods.go:126] duration metric: took 3.287717ms to wait for k8s-apps to be running ...
	I1213 09:10:27.835757  324697 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:10:27.835807  324697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:10:27.851112  324697 system_svc.go:56] duration metric: took 15.344541ms WaitForService to wait for kubelet
	I1213 09:10:27.851145  324697 kubeadm.go:587] duration metric: took 3.128748664s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:10:27.851167  324697 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:10:27.857150  324697 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:10:27.857180  324697 node_conditions.go:123] node cpu capacity is 8
	I1213 09:10:27.857207  324697 node_conditions.go:105] duration metric: took 6.034035ms to run NodePressure ...
	I1213 09:10:27.857222  324697 start.go:242] waiting for startup goroutines ...
	I1213 09:10:27.857232  324697 start.go:247] waiting for cluster config update ...
	I1213 09:10:27.857250  324697 start.go:256] writing updated cluster config ...
	I1213 09:10:27.857626  324697 ssh_runner.go:195] Run: rm -f paused
	I1213 09:10:27.862594  324697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:10:27.866946  324697 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:10:29.874272  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:10:28.547810  328914 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1213 09:10:28.548037  328914 start.go:159] libmachine.API.Create for "default-k8s-diff-port-361270" (driver="docker")
	I1213 09:10:28.548071  328914 client.go:173] LocalClient.Create starting
	I1213 09:10:28.548133  328914 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem
	I1213 09:10:28.548170  328914 main.go:143] libmachine: Decoding PEM data...
	I1213 09:10:28.548199  328914 main.go:143] libmachine: Parsing certificate...
	I1213 09:10:28.548274  328914 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem
	I1213 09:10:28.548307  328914 main.go:143] libmachine: Decoding PEM data...
	I1213 09:10:28.548327  328914 main.go:143] libmachine: Parsing certificate...
	I1213 09:10:28.548713  328914 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-361270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 09:10:28.566310  328914 cli_runner.go:211] docker network inspect default-k8s-diff-port-361270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 09:10:28.566380  328914 network_create.go:284] running [docker network inspect default-k8s-diff-port-361270] to gather additional debugging logs...
	I1213 09:10:28.566405  328914 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-361270
	W1213 09:10:28.584796  328914 cli_runner.go:211] docker network inspect default-k8s-diff-port-361270 returned with exit code 1
	I1213 09:10:28.584841  328914 network_create.go:287] error running [docker network inspect default-k8s-diff-port-361270]: docker network inspect default-k8s-diff-port-361270: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-361270 not found
	I1213 09:10:28.584861  328914 network_create.go:289] output of [docker network inspect default-k8s-diff-port-361270]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-361270 not found
	
	** /stderr **
	I1213 09:10:28.585023  328914 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:10:28.611370  328914 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b9f57735373a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:3a:37:6d:21:84} reservation:<nil>}
	I1213 09:10:28.612677  328914 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6ee6a6cb099f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9e:13:d6:80:5b:9d} reservation:<nil>}
	I1213 09:10:28.613679  328914 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9c992914162b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:1e:b1:9a:07:84:35} reservation:<nil>}
	I1213 09:10:28.614377  328914 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-03cd5b8c21be IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:56:97:ed:da:2b:60} reservation:<nil>}
	I1213 09:10:28.615362  328914 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f3a87dafd473 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:d6:80:c0:18:32:ac} reservation:<nil>}
	I1213 09:10:28.616160  328914 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-dfb09ffcf677 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4a:d3:6c:56:d0:2f} reservation:<nil>}
	I1213 09:10:28.617457  328914 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f68c70}
	I1213 09:10:28.617497  328914 network_create.go:124] attempt to create docker network default-k8s-diff-port-361270 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1213 09:10:28.617574  328914 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-361270 default-k8s-diff-port-361270
	I1213 09:10:28.682705  328914 network_create.go:108] docker network default-k8s-diff-port-361270 192.168.103.0/24 created
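The subnet probe above skips 192.168.49.0/24 through 192.168.94.0/24 (each already backed by an existing br-* bridge) and settles on 192.168.103.0/24, stepping the third octet by 9 each time. A minimal sketch of that selection logic, assuming the set of taken subnets has already been collected from docker network inspect (the helper name and the fixed step are hypothetical, not minikube's actual implementation):

package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets starting at 192.168.<start>.0,
// advancing the third octet by step, and returns the first CIDR that is not
// already used by an existing bridge network.
func firstFreeSubnet(taken map[string]bool, start, step int) string {
	for octet := start; octet <= 255; octet += step {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets reported as taken in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
		"192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken, 49, 9)) // prints 192.168.103.0/24
}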
	I1213 09:10:28.682742  328914 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-361270" container
	I1213 09:10:28.682809  328914 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 09:10:28.703368  328914 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-361270 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-361270 --label created_by.minikube.sigs.k8s.io=true
	I1213 09:10:28.720703  328914 oci.go:103] Successfully created a docker volume default-k8s-diff-port-361270
	I1213 09:10:28.720774  328914 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-361270-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-361270 --entrypoint /usr/bin/test -v default-k8s-diff-port-361270:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
	I1213 09:10:29.160376  328914 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-361270
	I1213 09:10:29.160468  328914 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:10:29.160495  328914 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 09:10:29.160580  328914 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-361270:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
	W1213 09:10:32.787384  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	W1213 09:10:35.128946  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	W1213 09:10:32.372545  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:34.373398  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:36.872537  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:10:34.294701  328914 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-361270:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (5.134059053s)
	I1213 09:10:34.294737  328914 kic.go:203] duration metric: took 5.134250585s to extract preloaded images to volume ...
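The extraction above works by running the kicbase image with /usr/bin/tar as the entrypoint, so the lz4 preload tarball is unpacked directly into the named Docker volume that later backs /var in the node container. A hedged sketch of that invocation from Go using os/exec, with the paths taken from the log (illustrative only; minikube drives this through its own cli_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
	volume := "default-k8s-diff-port-361270"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083"

	// Run tar inside the kicbase image, mounting the tarball read-only and the
	// target volume at /extractDir, mirroring the command shown in the log.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preloaded images extracted into volume", volume)
}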
	W1213 09:10:34.294847  328914 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1213 09:10:34.294892  328914 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1213 09:10:34.294942  328914 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 09:10:34.371791  328914 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-361270 --name default-k8s-diff-port-361270 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-361270 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-361270 --network default-k8s-diff-port-361270 --ip 192.168.103.2 --volume default-k8s-diff-port-361270:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
	I1213 09:10:34.864551  328914 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Running}}
	I1213 09:10:34.886115  328914 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:10:34.905808  328914 cli_runner.go:164] Run: docker exec default-k8s-diff-port-361270 stat /var/lib/dpkg/alternatives/iptables
	I1213 09:10:34.955752  328914 oci.go:144] the created container "default-k8s-diff-port-361270" has a running status.
	I1213 09:10:34.955785  328914 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa...
	I1213 09:10:35.262697  328914 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 09:10:35.364468  328914 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:10:35.389585  328914 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 09:10:35.389605  328914 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-361270 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 09:10:35.451632  328914 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:10:35.475446  328914 machine.go:94] provisionDockerMachine start ...
	I1213 09:10:35.475680  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:35.498193  328914 main.go:143] libmachine: Using SSH client type: native
	I1213 09:10:35.498588  328914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 09:10:35.498608  328914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:10:35.649011  328914 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361270
	
	I1213 09:10:35.649041  328914 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-361270"
	I1213 09:10:35.649113  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:35.673679  328914 main.go:143] libmachine: Using SSH client type: native
	I1213 09:10:35.674011  328914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 09:10:35.674043  328914 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361270 && echo "default-k8s-diff-port-361270" | sudo tee /etc/hostname
	I1213 09:10:35.832702  328914 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361270
	
	I1213 09:10:35.832777  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:35.853233  328914 main.go:143] libmachine: Using SSH client type: native
	I1213 09:10:35.853449  328914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 09:10:35.853473  328914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:10:35.992761  328914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:10:35.992806  328914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:10:35.992849  328914 ubuntu.go:190] setting up certificates
	I1213 09:10:35.992867  328914 provision.go:84] configureAuth start
	I1213 09:10:35.992932  328914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:10:36.012357  328914 provision.go:143] copyHostCerts
	I1213 09:10:36.012421  328914 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:10:36.012434  328914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:10:36.012482  328914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:10:36.012654  328914 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:10:36.012678  328914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:10:36.013226  328914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:10:36.013310  328914 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:10:36.013316  328914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:10:36.013345  328914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:10:36.013399  328914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361270 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-361270 localhost minikube]
	I1213 09:10:36.051828  328914 provision.go:177] copyRemoteCerts
	I1213 09:10:36.051878  328914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:10:36.051925  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.071573  328914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:10:36.169807  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:10:36.190304  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1213 09:10:36.210045  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:10:36.228045  328914 provision.go:87] duration metric: took 235.159053ms to configureAuth
	I1213 09:10:36.228068  328914 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:10:36.228221  328914 config.go:182] Loaded profile config "default-k8s-diff-port-361270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:10:36.228316  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.246732  328914 main.go:143] libmachine: Using SSH client type: native
	I1213 09:10:36.246954  328914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1213 09:10:36.246971  328914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:10:36.528069  328914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:10:36.528102  328914 machine.go:97] duration metric: took 1.052534816s to provisionDockerMachine
	I1213 09:10:36.528115  328914 client.go:176] duration metric: took 7.980036999s to LocalClient.Create
	I1213 09:10:36.528128  328914 start.go:167] duration metric: took 7.980090885s to libmachine.API.Create "default-k8s-diff-port-361270"
	I1213 09:10:36.528137  328914 start.go:293] postStartSetup for "default-k8s-diff-port-361270" (driver="docker")
	I1213 09:10:36.528152  328914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:10:36.528224  328914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:10:36.528266  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.546137  328914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:10:36.645116  328914 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:10:36.648850  328914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:10:36.648882  328914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:10:36.648894  328914 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:10:36.648947  328914 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:10:36.649042  328914 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:10:36.649146  328914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:10:36.657651  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:10:36.678919  328914 start.go:296] duration metric: took 150.764775ms for postStartSetup
	I1213 09:10:36.679248  328914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:10:36.697580  328914 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/config.json ...
	I1213 09:10:36.697887  328914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:10:36.697939  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.716140  328914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:10:36.810761  328914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:10:36.815817  328914 start.go:128] duration metric: took 8.269785769s to createHost
	I1213 09:10:36.815842  328914 start.go:83] releasing machines lock for "default-k8s-diff-port-361270", held for 8.269941231s
	I1213 09:10:36.815913  328914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:10:36.835551  328914 ssh_runner.go:195] Run: cat /version.json
	I1213 09:10:36.835611  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.835641  328914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:10:36.835718  328914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:10:36.853652  328914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:10:36.853938  328914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:10:37.002783  328914 ssh_runner.go:195] Run: systemctl --version
	I1213 09:10:37.009598  328914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:10:37.044378  328914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:10:37.049141  328914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:10:37.049214  328914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:10:37.075116  328914 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1213 09:10:37.075140  328914 start.go:496] detecting cgroup driver to use...
	I1213 09:10:37.075169  328914 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:10:37.075220  328914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:10:37.091272  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:10:37.103444  328914 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:10:37.103514  328914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:10:37.120226  328914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:10:37.139394  328914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:10:37.225377  328914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:10:37.322237  328914 docker.go:234] disabling docker service ...
	I1213 09:10:37.322297  328914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:10:37.340989  328914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:10:37.354578  328914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:10:37.445676  328914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:10:37.536367  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:10:37.549195  328914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:10:37.563980  328914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:10:37.564049  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.576065  328914 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:10:37.576121  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.586142  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.594731  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.603593  328914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:10:37.611728  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.620046  328914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.634076  328914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:10:37.643010  328914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:10:37.650362  328914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:10:37.657829  328914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:10:37.735158  328914 ssh_runner.go:195] Run: sudo systemctl restart crio
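The sequence of sed edits above pins the pause image, switches CRI-O's cgroup_manager to systemd, and injects the net.ipv4.ip_unprivileged_port_start sysctl before crio is restarted. A rough Go equivalent of the first two in-place rewrites, assuming local access to /etc/crio/crio.conf.d/02-crio.conf (a sketch only; the real flow runs sed over SSH as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	conf := string(data)

	// Pin the pause image, mirroring: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Use the systemd cgroup manager, mirroring the second sed in the log.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Println("write:", err)
	}
}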
	I1213 09:10:37.967970  328914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:10:37.968036  328914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:10:37.972538  328914 start.go:564] Will wait 60s for crictl version
	I1213 09:10:37.972600  328914 ssh_runner.go:195] Run: which crictl
	I1213 09:10:37.976355  328914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:10:38.000565  328914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:10:38.000641  328914 ssh_runner.go:195] Run: crio --version
	I1213 09:10:38.028421  328914 ssh_runner.go:195] Run: crio --version
	I1213 09:10:38.057516  328914 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 09:10:38.058672  328914 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-361270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:10:38.076417  328914 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 09:10:38.081149  328914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
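The bash one-liner above keeps the /etc/hosts edit idempotent: any existing host.minikube.internal line is filtered out before the gateway mapping 192.168.103.1 is appended and the file is copied back with sudo. A small sketch of the same idea as a Go helper (ensureHostsEntry is a hypothetical name; the logged flow does this with grep, echo and cp over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip\thost",
// returning the rewritten file content.
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) || strings.HasSuffix(line, " "+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	updated := ensureHostsEntry(strings.TrimRight(string(data), "\n"),
		"192.168.103.1", "host.minikube.internal")
	fmt.Print(updated) // the real flow writes this back via sudo cp
}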
	I1213 09:10:38.092956  328914 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:10:38.093101  328914 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:10:38.093158  328914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:10:38.127721  328914 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:10:38.127740  328914 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:10:38.127781  328914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:10:38.153858  328914 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:10:38.153880  328914 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:10:38.153888  328914 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1213 09:10:38.153972  328914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-361270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:10:38.154036  328914 ssh_runner.go:195] Run: crio config
	I1213 09:10:38.199227  328914 cni.go:84] Creating CNI manager for ""
	I1213 09:10:38.199246  328914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:10:38.199262  328914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:10:38.199282  328914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361270 NodeName:default-k8s-diff-port-361270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:10:38.199398  328914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:10:38.199455  328914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:10:38.209852  328914 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:10:38.209909  328914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:10:38.218279  328914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1213 09:10:38.230905  328914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:10:38.246380  328914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1213 09:10:38.258738  328914 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:10:38.262590  328914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:10:38.273403  328914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:10:38.356748  328914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:10:38.396090  328914 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270 for IP: 192.168.103.2
	I1213 09:10:38.396113  328914 certs.go:195] generating shared ca certs ...
	I1213 09:10:38.396129  328914 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.396279  328914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:10:38.396332  328914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:10:38.396342  328914 certs.go:257] generating profile certs ...
	I1213 09:10:38.396392  328914 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.key
	I1213 09:10:38.396412  328914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.crt with IP's: []
	I1213 09:10:38.478682  328914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.crt ...
	I1213 09:10:38.478711  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.crt: {Name:mkdfb5c87fe54d8031064e0f6eaf3ac18fdad0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.478889  328914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.key ...
	I1213 09:10:38.478903  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.key: {Name:mkce9195b9079d2380055c6d305ede767f270502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.479023  328914 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key.371ad0ca
	I1213 09:10:38.479048  328914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt.371ad0ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1213 09:10:38.517150  328914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt.371ad0ca ...
	I1213 09:10:38.517174  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt.371ad0ca: {Name:mk19590d06ec0bda04f1b158fc775b99b3036f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.517352  328914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key.371ad0ca ...
	I1213 09:10:38.517368  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key.371ad0ca: {Name:mk8616dc8e1defe971790527eb7926d72168d179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.517468  328914 certs.go:382] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt.371ad0ca -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt
	I1213 09:10:38.517602  328914 certs.go:386] copying /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key.371ad0ca -> /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key
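The apiserver profile cert above is generated with the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2, i.e. the first service-CIDR address, loopback, and the node IP. A minimal crypto/x509 sketch showing how such IP SANs end up in a certificate; it self-signs for brevity, whereas the logged flow signs with the shared minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// IP SANs matching the apiserver cert in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.103.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	// Self-signed for brevity: the template is also used as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}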
	I1213 09:10:38.517695  328914 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key
	I1213 09:10:38.517716  328914 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.crt with IP's: []
	I1213 09:10:38.554347  328914 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.crt ...
	I1213 09:10:38.554389  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.crt: {Name:mk7817c6948615171ab2dda8b44df04ffe864ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.554592  328914 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key ...
	I1213 09:10:38.554609  328914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key: {Name:mkfccb5ba70270204b5b340e10b5dfc3d58dbc3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:10:38.554842  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:10:38.554888  328914 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:10:38.554904  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:10:38.554941  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:10:38.554989  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:10:38.555024  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:10:38.555081  328914 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:10:38.555688  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:10:38.575281  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:10:38.593247  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:10:38.611179  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:10:38.630354  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 09:10:38.648210  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:10:38.665498  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:10:38.683310  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:10:38.702242  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:10:38.722093  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:10:38.739958  328914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:10:38.757508  328914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:10:38.770454  328914 ssh_runner.go:195] Run: openssl version
	I1213 09:10:38.776646  328914 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:10:38.784064  328914 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:10:38.791909  328914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:10:38.795631  328914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:10:38.795687  328914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:10:38.833978  328914 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:10:38.841586  328914 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/93032.pem /etc/ssl/certs/3ec20f2e.0
	I1213 09:10:38.849620  328914 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:10:38.857278  328914 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:10:38.865058  328914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:10:38.868804  328914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:10:38.868864  328914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:10:38.905959  328914 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:10:38.915095  328914 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1213 09:10:38.923980  328914 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:10:38.933286  328914 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:10:38.946812  328914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:10:38.951358  328914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:10:38.951413  328914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:10:38.990647  328914 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:10:39.001375  328914 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9303.pem /etc/ssl/certs/51391683.0
	I1213 09:10:39.012316  328914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:10:39.018016  328914 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 09:10:39.018077  328914 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:10:39.018179  328914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:10:39.018264  328914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:10:39.058969  328914 cri.go:89] found id: ""
	I1213 09:10:39.059041  328914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:10:39.068381  328914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 09:10:39.076665  328914 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1213 09:10:39.076739  328914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 09:10:39.086280  328914 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 09:10:39.086302  328914 kubeadm.go:158] found existing configuration files:
	
	I1213 09:10:39.086349  328914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1213 09:10:39.095625  328914 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 09:10:39.095674  328914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 09:10:39.103110  328914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1213 09:10:39.111222  328914 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 09:10:39.111285  328914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 09:10:39.119481  328914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1213 09:10:39.128852  328914 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 09:10:39.128928  328914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 09:10:39.138129  328914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1213 09:10:39.146236  328914 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 09:10:39.146286  328914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 09:10:39.154727  328914 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 09:10:39.192308  328914 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1213 09:10:39.192371  328914 kubeadm.go:319] [preflight] Running pre-flight checks
	I1213 09:10:39.227895  328914 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1213 09:10:39.227968  328914 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1213 09:10:39.228037  328914 kubeadm.go:319] OS: Linux
	I1213 09:10:39.228125  328914 kubeadm.go:319] CGROUPS_CPU: enabled
	I1213 09:10:39.228203  328914 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1213 09:10:39.228271  328914 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1213 09:10:39.228331  328914 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1213 09:10:39.228394  328914 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1213 09:10:39.228502  328914 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1213 09:10:39.228590  328914 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1213 09:10:39.228666  328914 kubeadm.go:319] CGROUPS_IO: enabled
	I1213 09:10:39.294964  328914 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 09:10:39.295148  328914 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 09:10:39.295307  328914 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 09:10:39.303362  328914 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1213 09:10:37.626347  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	W1213 09:10:39.626939  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:10:27 embed-certs-379362 crio[778]: time="2025-12-13T09:10:27.694233123Z" level=info msg="Starting container: 378859f626d2943f8b1360ff4e6efb0c03ca28084cb9a95bc67e421d09d41887" id=1ac80ef3-4491-462e-a646-a9ed666bfa9a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:27 embed-certs-379362 crio[778]: time="2025-12-13T09:10:27.696467762Z" level=info msg="Started container" PID=1838 containerID=378859f626d2943f8b1360ff4e6efb0c03ca28084cb9a95bc67e421d09d41887 description=kube-system/coredns-66bc5c9577-24vtj/coredns id=1ac80ef3-4491-462e-a646-a9ed666bfa9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=84f3d4d0a36956438cce70afc6d1780a6146bf1a82d5f15de3d7a3cfee28ab71
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.025013182Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c7bd7f16-e441-4ab6-8869-ee5015974a6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.025101332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.031510864Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3a1eb526a782ed2003329ca58ec961fe5c44919dbb286cf293e3943d585cff00 UID:94b6473c-a93e-4ff9-a33c-f88515ae0f39 NetNS:/var/run/netns/274f9b58-7b1b-44b2-8d76-bf48863c3758 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e628}] Aliases:map[]}"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.03155079Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.045116828Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3a1eb526a782ed2003329ca58ec961fe5c44919dbb286cf293e3943d585cff00 UID:94b6473c-a93e-4ff9-a33c-f88515ae0f39 NetNS:/var/run/netns/274f9b58-7b1b-44b2-8d76-bf48863c3758 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e628}] Aliases:map[]}"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.045316542Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.046294525Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.047536512Z" level=info msg="Ran pod sandbox 3a1eb526a782ed2003329ca58ec961fe5c44919dbb286cf293e3943d585cff00 with infra container: default/busybox/POD" id=c7bd7f16-e441-4ab6-8869-ee5015974a6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.048894287Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7c4539bd-d7c0-4896-ae15-14b97c1e5759 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.049042748Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7c4539bd-d7c0-4896-ae15-14b97c1e5759 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.049098921Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7c4539bd-d7c0-4896-ae15-14b97c1e5759 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.050373562Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a3c42ec-a265-4a06-9959-3cccb41838f6 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:10:31 embed-certs-379362 crio[778]: time="2025-12-13T09:10:31.055235802Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.281195768Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1a3c42ec-a265-4a06-9959-3cccb41838f6 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.282102524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e1184af-05eb-42d5-ad2a-134c339c7b75 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.284225627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=198f9a8a-57c5-47f7-85a4-e329851c86ea name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.290289003Z" level=info msg="Creating container: default/busybox/busybox" id=b9abf18e-e0bd-4c0d-97ff-14a0cd061f24 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.290780113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.297867171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.298566806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.342838639Z" level=info msg="Created container 09bfdb289fc47e903c591c5d217d1af08bce375014df5386888caef542502941: default/busybox/busybox" id=b9abf18e-e0bd-4c0d-97ff-14a0cd061f24 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.344142931Z" level=info msg="Starting container: 09bfdb289fc47e903c591c5d217d1af08bce375014df5386888caef542502941" id=8c10c631-b562-40cb-8c6c-2df343c5aeea name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:34 embed-certs-379362 crio[778]: time="2025-12-13T09:10:34.346712471Z" level=info msg="Started container" PID=1910 containerID=09bfdb289fc47e903c591c5d217d1af08bce375014df5386888caef542502941 description=default/busybox/busybox id=8c10c631-b562-40cb-8c6c-2df343c5aeea name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a1eb526a782ed2003329ca58ec961fe5c44919dbb286cf293e3943d585cff00
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	09bfdb289fc47       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   3a1eb526a782e       busybox                                      default
	378859f626d29       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   84f3d4d0a3695       coredns-66bc5c9577-24vtj                     kube-system
	bba456ff227e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   80796b0b18f71       storage-provisioner                          kube-system
	4c0279f057199       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      26 seconds ago      Running             kindnet-cni               0                   f2303cf80b8c0       kindnet-4vk4d                                kube-system
	908313d942714       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      26 seconds ago      Running             kube-proxy                0                   e489329889dc6       kube-proxy-zmtpb                             kube-system
	894aa521a045e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      36 seconds ago      Running             etcd                      0                   5c409197da9f4       etcd-embed-certs-379362                      kube-system
	6d82f82f74a3c       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      36 seconds ago      Running             kube-scheduler            0                   658037c7ee1be       kube-scheduler-embed-certs-379362            kube-system
	1a72605719d10       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      36 seconds ago      Running             kube-apiserver            0                   82ec08a02a7f0       kube-apiserver-embed-certs-379362            kube-system
	200d40050467c       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      36 seconds ago      Running             kube-controller-manager   0                   9ccfd1527e4bb       kube-controller-manager-embed-certs-379362   kube-system
	
	
	==> coredns [378859f626d2943f8b1360ff4e6efb0c03ca28084cb9a95bc67e421d09d41887] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54380 - 26478 "HINFO IN 3345321786962649870.5853805090595966024. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052158079s
	
	
	==> describe nodes <==
	Name:               embed-certs-379362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-379362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=embed-certs-379362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-379362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:10:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:10:40 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:10:40 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:10:40 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:10:40 +0000   Sat, 13 Dec 2025 09:10:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-379362
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                93d464fd-d722-496b-b12c-6011440d8ee6
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-24vtj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-379362                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-4vk4d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-379362             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-379362    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-zmtpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-379362             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node embed-certs-379362 event: Registered Node embed-certs-379362 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-379362 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [894aa521a045e1683ec05d5017b6685ebf23e650d0c85d4199dcdc063ef6e104] <==
	{"level":"warn","ts":"2025-12-13T09:10:07.228653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.235630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.242439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.250665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.258242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.265061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.272542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.279368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.293661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.300284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.306828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.313608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.320124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.326962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.334166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.342622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.349875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.359869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.370720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.377564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.393977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.400352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.407176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:07.452660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32846","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:10:33.916921Z","caller":"traceutil/trace.go:172","msg":"trace[1648662963] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"110.546832ms","start":"2025-12-13T09:10:33.806345Z","end":"2025-12-13T09:10:33.916892Z","steps":["trace[1648662963] 'process raft request'  (duration: 110.33282ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:10:42 up 53 min,  0 user,  load average: 5.43, 3.70, 2.37
	Linux embed-certs-379362 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c0279f0571996c9f2002af4b96e4c8b7cd3b4309b326c799d14370cdb80661b] <==
	I1213 09:10:16.571084       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:16.571431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 09:10:16.573661       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:16.573702       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:16.573726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:16.870174       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:16.870242       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:16.870254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:16.871006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:17.170634       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:17.170660       1 metrics.go:72] Registering metrics
	I1213 09:10:17.170722       1 controller.go:711] "Syncing nftables rules"
	I1213 09:10:26.874612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:10:26.874662       1 main.go:301] handling current node
	I1213 09:10:36.870049       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:10:36.870086       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1a72605719d109b19f2207d174193e8bdb7d5240ceba3ddd69af1328a1d6d650] <==
	I1213 09:10:07.944302       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1213 09:10:07.944421       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:10:07.948281       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 09:10:07.948553       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:07.954096       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:07.954716       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:10:07.973921       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:08.846734       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 09:10:08.850431       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:10:08.850450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:10:09.324282       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:09.361149       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:09.452305       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:10:09.457939       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1213 09:10:09.458970       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:10:09.463171       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:10:09.860666       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:10:10.499876       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:10:10.509298       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:10:10.517678       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:10:15.614248       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:10:15.666652       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:15.673034       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:15.913073       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1213 09:10:40.831803       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43280: use of closed network connection
	
	
	==> kube-controller-manager [200d40050467c85159c6da95d9be4d8efd0f231cee0dabd2fa9ea3a10a6d447b] <==
	I1213 09:10:14.861055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1213 09:10:14.861068       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:10:14.861093       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:10:14.861098       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:10:14.861124       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:10:14.861136       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:10:14.861415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1213 09:10:14.862443       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:10:14.862467       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1213 09:10:14.862514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:10:14.862593       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1213 09:10:14.863938       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:10:14.865062       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:10:14.865109       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:10:14.865136       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:10:14.865144       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:10:14.865149       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:10:14.865301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:10:14.869345       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1213 09:10:14.870545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:10:14.871319       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-379362" podCIDRs=["10.244.0.0/24"]
	I1213 09:10:14.884450       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1213 09:10:14.892578       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:10:14.895813       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:10:29.806827       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [908313d9427140629440051307ce4d9926b61a1c639201b1b5d52d9f91a0754b] <==
	I1213 09:10:16.388078       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:10:16.454772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:10:16.555256       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:10:16.555393       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 09:10:16.555569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:10:16.585364       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:16.585441       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:10:16.594945       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:10:16.598008       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:10:16.598069       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:16.601015       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:10:16.601040       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:10:16.601068       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:10:16.601072       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:10:16.601109       1 config.go:200] "Starting service config controller"
	I1213 09:10:16.601173       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:10:16.601398       1 config.go:309] "Starting node config controller"
	I1213 09:10:16.601413       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:10:16.701568       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:10:16.701604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:10:16.701605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:10:16.701613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6d82f82f74a3c8dd68c8667e63a5ebe49ca93ec49c71323fd3690fd91d9b4b76] <==
	E1213 09:10:08.200696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:10:08.200832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:10:08.202111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:10:08.202165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:10:08.202219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1213 09:10:08.202252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:10:08.202258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:10:08.202294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:10:08.202756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:10:08.202894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:10:08.203217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:10:08.203264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:10:08.203308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:10:08.203379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:10:08.203407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:10:08.203425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:10:08.203638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:10:08.203637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:10:08.203742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:10:09.082275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:10:09.098444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:10:09.159338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:10:09.159339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:10:09.164591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1213 09:10:09.499569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:10:11 embed-certs-379362 kubelet[1314]: I1213 09:10:11.364478    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-379362" podStartSLOduration=1.36445525 podStartE2EDuration="1.36445525s" podCreationTimestamp="2025-12-13 09:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:11.354176102 +0000 UTC m=+1.123971333" watchObservedRunningTime="2025-12-13 09:10:11.36445525 +0000 UTC m=+1.134250481"
	Dec 13 09:10:11 embed-certs-379362 kubelet[1314]: I1213 09:10:11.375042    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-379362" podStartSLOduration=1.375015695 podStartE2EDuration="1.375015695s" podCreationTimestamp="2025-12-13 09:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:11.364434328 +0000 UTC m=+1.134229558" watchObservedRunningTime="2025-12-13 09:10:11.375015695 +0000 UTC m=+1.144810926"
	Dec 13 09:10:11 embed-certs-379362 kubelet[1314]: I1213 09:10:11.375195    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-379362" podStartSLOduration=2.375187053 podStartE2EDuration="2.375187053s" podCreationTimestamp="2025-12-13 09:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:11.374762067 +0000 UTC m=+1.144557298" watchObservedRunningTime="2025-12-13 09:10:11.375187053 +0000 UTC m=+1.144982281"
	Dec 13 09:10:11 embed-certs-379362 kubelet[1314]: I1213 09:10:11.384353    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-379362" podStartSLOduration=1.384332049 podStartE2EDuration="1.384332049s" podCreationTimestamp="2025-12-13 09:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:11.384179442 +0000 UTC m=+1.153974676" watchObservedRunningTime="2025-12-13 09:10:11.384332049 +0000 UTC m=+1.154127280"
	Dec 13 09:10:14 embed-certs-379362 kubelet[1314]: I1213 09:10:14.929334    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 09:10:14 embed-certs-379362 kubelet[1314]: I1213 09:10:14.930017    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.945993    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6bfb114-7843-46f4-8244-db73b00b7e6a-kube-proxy\") pod \"kube-proxy-zmtpb\" (UID: \"c6bfb114-7843-46f4-8244-db73b00b7e6a\") " pod="kube-system/kube-proxy-zmtpb"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946035    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6bfb114-7843-46f4-8244-db73b00b7e6a-xtables-lock\") pod \"kube-proxy-zmtpb\" (UID: \"c6bfb114-7843-46f4-8244-db73b00b7e6a\") " pod="kube-system/kube-proxy-zmtpb"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946056    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23fa27ce-887f-4910-af8d-74b11ea2df32-lib-modules\") pod \"kindnet-4vk4d\" (UID: \"23fa27ce-887f-4910-af8d-74b11ea2df32\") " pod="kube-system/kindnet-4vk4d"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946081    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6bfb114-7843-46f4-8244-db73b00b7e6a-lib-modules\") pod \"kube-proxy-zmtpb\" (UID: \"c6bfb114-7843-46f4-8244-db73b00b7e6a\") " pod="kube-system/kube-proxy-zmtpb"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946098    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zvv\" (UniqueName: \"kubernetes.io/projected/c6bfb114-7843-46f4-8244-db73b00b7e6a-kube-api-access-94zvv\") pod \"kube-proxy-zmtpb\" (UID: \"c6bfb114-7843-46f4-8244-db73b00b7e6a\") " pod="kube-system/kube-proxy-zmtpb"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946137    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/23fa27ce-887f-4910-af8d-74b11ea2df32-cni-cfg\") pod \"kindnet-4vk4d\" (UID: \"23fa27ce-887f-4910-af8d-74b11ea2df32\") " pod="kube-system/kindnet-4vk4d"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946151    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23fa27ce-887f-4910-af8d-74b11ea2df32-xtables-lock\") pod \"kindnet-4vk4d\" (UID: \"23fa27ce-887f-4910-af8d-74b11ea2df32\") " pod="kube-system/kindnet-4vk4d"
	Dec 13 09:10:15 embed-certs-379362 kubelet[1314]: I1213 09:10:15.946173    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrqb\" (UniqueName: \"kubernetes.io/projected/23fa27ce-887f-4910-af8d-74b11ea2df32-kube-api-access-4lrqb\") pod \"kindnet-4vk4d\" (UID: \"23fa27ce-887f-4910-af8d-74b11ea2df32\") " pod="kube-system/kindnet-4vk4d"
	Dec 13 09:10:16 embed-certs-379362 kubelet[1314]: I1213 09:10:16.390430    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zmtpb" podStartSLOduration=1.390406303 podStartE2EDuration="1.390406303s" podCreationTimestamp="2025-12-13 09:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:16.37264037 +0000 UTC m=+6.142435599" watchObservedRunningTime="2025-12-13 09:10:16.390406303 +0000 UTC m=+6.160201535"
	Dec 13 09:10:18 embed-certs-379362 kubelet[1314]: I1213 09:10:18.754761    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4vk4d" podStartSLOduration=3.754734496 podStartE2EDuration="3.754734496s" podCreationTimestamp="2025-12-13 09:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:16.390679976 +0000 UTC m=+6.160475204" watchObservedRunningTime="2025-12-13 09:10:18.754734496 +0000 UTC m=+8.524529724"
	Dec 13 09:10:27 embed-certs-379362 kubelet[1314]: I1213 09:10:27.252623    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 09:10:27 embed-certs-379362 kubelet[1314]: I1213 09:10:27.344176    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8986d496-b2cb-429d-80ec-2f326920e440-config-volume\") pod \"coredns-66bc5c9577-24vtj\" (UID: \"8986d496-b2cb-429d-80ec-2f326920e440\") " pod="kube-system/coredns-66bc5c9577-24vtj"
	Dec 13 09:10:27 embed-certs-379362 kubelet[1314]: I1213 09:10:27.344250    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hn4w\" (UniqueName: \"kubernetes.io/projected/8986d496-b2cb-429d-80ec-2f326920e440-kube-api-access-7hn4w\") pod \"coredns-66bc5c9577-24vtj\" (UID: \"8986d496-b2cb-429d-80ec-2f326920e440\") " pod="kube-system/coredns-66bc5c9577-24vtj"
	Dec 13 09:10:27 embed-certs-379362 kubelet[1314]: I1213 09:10:27.344325    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/937cc208-1949-4660-a328-292224786f1b-tmp\") pod \"storage-provisioner\" (UID: \"937cc208-1949-4660-a328-292224786f1b\") " pod="kube-system/storage-provisioner"
	Dec 13 09:10:27 embed-certs-379362 kubelet[1314]: I1213 09:10:27.344351    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8dq\" (UniqueName: \"kubernetes.io/projected/937cc208-1949-4660-a328-292224786f1b-kube-api-access-gl8dq\") pod \"storage-provisioner\" (UID: \"937cc208-1949-4660-a328-292224786f1b\") " pod="kube-system/storage-provisioner"
	Dec 13 09:10:28 embed-certs-379362 kubelet[1314]: I1213 09:10:28.398917    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.398895254 podStartE2EDuration="12.398895254s" podCreationTimestamp="2025-12-13 09:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:28.398526329 +0000 UTC m=+18.168321561" watchObservedRunningTime="2025-12-13 09:10:28.398895254 +0000 UTC m=+18.168690485"
	Dec 13 09:10:28 embed-certs-379362 kubelet[1314]: I1213 09:10:28.410098    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-24vtj" podStartSLOduration=12.410073791 podStartE2EDuration="12.410073791s" podCreationTimestamp="2025-12-13 09:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:28.40980336 +0000 UTC m=+18.179598591" watchObservedRunningTime="2025-12-13 09:10:28.410073791 +0000 UTC m=+18.179869022"
	Dec 13 09:10:30 embed-certs-379362 kubelet[1314]: I1213 09:10:30.765423    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwpf\" (UniqueName: \"kubernetes.io/projected/94b6473c-a93e-4ff9-a33c-f88515ae0f39-kube-api-access-stwpf\") pod \"busybox\" (UID: \"94b6473c-a93e-4ff9-a33c-f88515ae0f39\") " pod="default/busybox"
	Dec 13 09:10:34 embed-certs-379362 kubelet[1314]: I1213 09:10:34.470773    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.236887759 podStartE2EDuration="4.470754934s" podCreationTimestamp="2025-12-13 09:10:30 +0000 UTC" firstStartedPulling="2025-12-13 09:10:31.049506765 +0000 UTC m=+20.819301991" lastFinishedPulling="2025-12-13 09:10:34.283373941 +0000 UTC m=+24.053169166" observedRunningTime="2025-12-13 09:10:34.470328367 +0000 UTC m=+24.240123621" watchObservedRunningTime="2025-12-13 09:10:34.470754934 +0000 UTC m=+24.240550160"
	
	
	==> storage-provisioner [bba456ff227e165def5907e1074382479ac2372fd9e184da0d47ac92f6576108] <==
	I1213 09:10:27.705747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:10:27.716004       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:10:27.716191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:10:27.719657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:27.728162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:10:27.728984       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:10:27.729201       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_4ba61746-7f64-45c6-9e83-9a3105cce3f9!
	I1213 09:10:27.729533       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b7f9375-c00d-46e4-bb0f-70ff28c36dd3", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-379362_4ba61746-7f64-45c6-9e83-9a3105cce3f9 became leader
	W1213 09:10:27.731604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:27.738663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:10:27.830251       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_4ba61746-7f64-45c6-9e83-9a3105cce3f9!
	W1213 09:10:29.743450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:29.749714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:31.753146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:31.799701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:33.803571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:33.918403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:35.922391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:35.927002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:37.930858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:37.934692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:39.938099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:39.943109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:41.946635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:41.951244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-379362 -n embed-certs-379362
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-379362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-291522 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-291522 --alsologtostderr -v=1: exit status 80 (1.976786977s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-291522 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:11:16.881012  336425 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:16.881674  336425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:16.881688  336425 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:16.881695  336425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:16.882150  336425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:16.882856  336425 out.go:368] Setting JSON to false
	I1213 09:11:16.882905  336425 mustload.go:66] Loading cluster: no-preload-291522
	I1213 09:11:16.883409  336425 config.go:182] Loaded profile config "no-preload-291522": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:16.883961  336425 cli_runner.go:164] Run: docker container inspect no-preload-291522 --format={{.State.Status}}
	I1213 09:11:16.908625  336425 host.go:66] Checking if "no-preload-291522" exists ...
	I1213 09:11:16.908927  336425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:16.969836  336425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-12-13 09:11:16.958368271 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:16.970431  336425 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-291522 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:11:16.972483  336425 out.go:179] * Pausing node no-preload-291522 ... 
	I1213 09:11:16.974267  336425 host.go:66] Checking if "no-preload-291522" exists ...
	I1213 09:11:16.974503  336425 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:16.974540  336425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-291522
	I1213 09:11:16.992553  336425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/no-preload-291522/id_rsa Username:docker}
	I1213 09:11:17.091863  336425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:17.120296  336425 pause.go:52] kubelet running: true
	I1213 09:11:17.120381  336425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:17.359835  336425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:17.360021  336425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:17.460922  336425 cri.go:89] found id: "68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7"
	I1213 09:11:17.460950  336425 cri.go:89] found id: "5bfd9401264d34d0b0d2eb0000b940d6ff3b6283b164ef83fed5d41fa173d160"
	I1213 09:11:17.460956  336425 cri.go:89] found id: "1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18"
	I1213 09:11:17.460961  336425 cri.go:89] found id: "f2ff8aa0f7b65be2df303c832a34fdd6fb1cd31cf904c4955c07b3b3c73b8a8f"
	I1213 09:11:17.460965  336425 cri.go:89] found id: "0cc2dea823087fae8ecd9ac5823e4c2ef2cd22c680ca6865c5debeb27a6c9b96"
	I1213 09:11:17.460970  336425 cri.go:89] found id: "f8d1691eeb23819007271a5f04b8b81699f4e145a11d54fec11f89910cce3eda"
	I1213 09:11:17.460974  336425 cri.go:89] found id: "c1810afee538180d88c84228d084c1882c4e4161efd0b381dfe49512b1daff51"
	I1213 09:11:17.460979  336425 cri.go:89] found id: "595a2d6f50c49e4264151a01e2cf2cd1d109e03af0557b967299dbbc387d9a26"
	I1213 09:11:17.460984  336425 cri.go:89] found id: "c5352b1836776c6b17ea7fec0581ec2ac4de137ad305bf9d95497fdf8f4fb634"
	I1213 09:11:17.460992  336425 cri.go:89] found id: "b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	I1213 09:11:17.460996  336425 cri.go:89] found id: "a6598e1c508f2de6e4501253701b39af6be5786452d68184b46c10c6045bdba5"
	I1213 09:11:17.461011  336425 cri.go:89] found id: ""
	I1213 09:11:17.461126  336425 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:17.476213  336425 retry.go:31] will retry after 225.316299ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:17Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:17.702688  336425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:17.721946  336425 pause.go:52] kubelet running: false
	I1213 09:11:17.722124  336425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:17.933716  336425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:17.933797  336425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:18.018276  336425 cri.go:89] found id: "68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7"
	I1213 09:11:18.018304  336425 cri.go:89] found id: "5bfd9401264d34d0b0d2eb0000b940d6ff3b6283b164ef83fed5d41fa173d160"
	I1213 09:11:18.018311  336425 cri.go:89] found id: "1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18"
	I1213 09:11:18.018318  336425 cri.go:89] found id: "f2ff8aa0f7b65be2df303c832a34fdd6fb1cd31cf904c4955c07b3b3c73b8a8f"
	I1213 09:11:18.018322  336425 cri.go:89] found id: "0cc2dea823087fae8ecd9ac5823e4c2ef2cd22c680ca6865c5debeb27a6c9b96"
	I1213 09:11:18.018328  336425 cri.go:89] found id: "f8d1691eeb23819007271a5f04b8b81699f4e145a11d54fec11f89910cce3eda"
	I1213 09:11:18.018333  336425 cri.go:89] found id: "c1810afee538180d88c84228d084c1882c4e4161efd0b381dfe49512b1daff51"
	I1213 09:11:18.018337  336425 cri.go:89] found id: "595a2d6f50c49e4264151a01e2cf2cd1d109e03af0557b967299dbbc387d9a26"
	I1213 09:11:18.018341  336425 cri.go:89] found id: "c5352b1836776c6b17ea7fec0581ec2ac4de137ad305bf9d95497fdf8f4fb634"
	I1213 09:11:18.018352  336425 cri.go:89] found id: "b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	I1213 09:11:18.018360  336425 cri.go:89] found id: "a6598e1c508f2de6e4501253701b39af6be5786452d68184b46c10c6045bdba5"
	I1213 09:11:18.018365  336425 cri.go:89] found id: ""
	I1213 09:11:18.018408  336425 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:18.032596  336425 retry.go:31] will retry after 366.613559ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:18.399810  336425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:18.416438  336425 pause.go:52] kubelet running: false
	I1213 09:11:18.416537  336425 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:18.640575  336425 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:18.640645  336425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:18.740153  336425 cri.go:89] found id: "68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7"
	I1213 09:11:18.740173  336425 cri.go:89] found id: "5bfd9401264d34d0b0d2eb0000b940d6ff3b6283b164ef83fed5d41fa173d160"
	I1213 09:11:18.740179  336425 cri.go:89] found id: "1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18"
	I1213 09:11:18.740184  336425 cri.go:89] found id: "f2ff8aa0f7b65be2df303c832a34fdd6fb1cd31cf904c4955c07b3b3c73b8a8f"
	I1213 09:11:18.740189  336425 cri.go:89] found id: "0cc2dea823087fae8ecd9ac5823e4c2ef2cd22c680ca6865c5debeb27a6c9b96"
	I1213 09:11:18.740194  336425 cri.go:89] found id: "f8d1691eeb23819007271a5f04b8b81699f4e145a11d54fec11f89910cce3eda"
	I1213 09:11:18.740198  336425 cri.go:89] found id: "c1810afee538180d88c84228d084c1882c4e4161efd0b381dfe49512b1daff51"
	I1213 09:11:18.740202  336425 cri.go:89] found id: "595a2d6f50c49e4264151a01e2cf2cd1d109e03af0557b967299dbbc387d9a26"
	I1213 09:11:18.740206  336425 cri.go:89] found id: "c5352b1836776c6b17ea7fec0581ec2ac4de137ad305bf9d95497fdf8f4fb634"
	I1213 09:11:18.740215  336425 cri.go:89] found id: "b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	I1213 09:11:18.740219  336425 cri.go:89] found id: "a6598e1c508f2de6e4501253701b39af6be5786452d68184b46c10c6045bdba5"
	I1213 09:11:18.740224  336425 cri.go:89] found id: ""
	I1213 09:11:18.740264  336425 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:18.766619  336425 out.go:203] 
	W1213 09:11:18.769194  336425 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:11:18.769227  336425 out.go:285] * 
	* 
	W1213 09:11:18.775475  336425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:11:18.776954  336425 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-291522 --alsologtostderr -v=1 failed: exit status 80
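Aside (not part of the captured test output): the retry.go lines above show minikube backing off (225ms, then 366ms) before giving up and surfacing GUEST_PAUSE when `sudo runc list -f json` keeps failing with "open /run/runc: no such file or directory". A rough stand-alone sketch of that retry-with-backoff pattern is below; the command, attempt count, and delay values are illustrative assumptions, not minikube's actual implementation.

// Illustrative retry-with-backoff around "runc list -f json"; not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delays := []time.Duration{225 * time.Millisecond, 366 * time.Millisecond, 700 * time.Millisecond}
	var out []byte
	var err error
	for i := 0; i <= len(delays); i++ {
		out, err = exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil || i == len(delays) {
			break
		}
		fmt.Printf("attempt %d failed: %v; retrying after %v\n", i+1, err, delays[i])
		time.Sleep(delays[i])
	}
	if err != nil {
		// Mirrors the failure mode in the log: /run/runc is missing, so every attempt fails.
		fmt.Printf("giving up: %v\n%s\n", err, out)
		return
	}
	fmt.Println(string(out))
}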
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-291522
helpers_test.go:244: (dbg) docker inspect no-preload-291522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	        "Created": "2025-12-13T09:09:03.465040092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:10:15.943549827Z",
	            "FinishedAt": "2025-12-13T09:10:15.000269568Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hosts",
	        "LogPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f-json.log",
	        "Name": "/no-preload-291522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-291522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	                "LowerDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-291522",
	                "Source": "/var/lib/docker/volumes/no-preload-291522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291522",
	                "name.minikube.sigs.k8s.io": "no-preload-291522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4acb938dbe9331a1b9d3581af27aa00a54990489a55f0065c9c9761bdb97041",
	            "SandboxKey": "/var/run/docker/netns/a4acb938dbe9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-291522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dfb09ffcf6775e2f48749a68987a54ab42f782835937d81dea2e3e4a543a7d9d",
	                    "EndpointID": "820ac883dc28388a97c372cfa41f8c731307b095b16bd6baefb32ac552763df2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "2a:a4:b7:94:f1:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291522",
	                        "8646883e9b39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
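Aside (not part of the captured test output): the SSH host port 33108 under NetworkSettings.Ports is the same value the harness extracts earlier with a Go template passed to `docker container inspect -f`. A small sketch of that lookup, shelling out to docker with the container name hard-coded as an assumption, could be:

// Sketch: read the published host port for 22/tcp via docker inspect,
// reusing the Go template that appears in the minikube log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-291522").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33108
}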
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522: exit status 2 (363.786202ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-291522 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-291522 logs -n 25: (1.468873688s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-833990 sudo containerd config dump                                                                                                                                                                                                  │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                    │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:01.859763  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.859768  333890 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:01.859780  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.860007  333890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:01.860461  333890 out.go:368] Setting JSON to false
	I1213 09:11:01.861836  333890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3214,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:01.861905  333890 start.go:143] virtualization: kvm guest
	I1213 09:11:01.863731  333890 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:01.865249  333890 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:01.865281  333890 notify.go:221] Checking for updates...
	I1213 09:11:01.867359  333890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:01.868519  333890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:01.869842  333890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:01.871012  333890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:01.872143  333890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:01.873683  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:01.874233  333890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:01.901548  333890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:01.901656  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:01.959403  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:01.949301411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:01.959565  333890 docker.go:319] overlay module found
	I1213 09:11:01.961826  333890 out.go:179] * Using the docker driver based on existing profile
	W1213 09:10:57.872528  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:59.873309  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:01.962862  333890 start.go:309] selected driver: docker
	I1213 09:11:01.962874  333890 start.go:927] validating driver "docker" against &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:01.962966  333890 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:01.963566  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:02.021259  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:02.010959916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:02.021565  333890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:02.021623  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:02.021676  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:02.021713  333890 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:02.023438  333890 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:11:02.024571  333890 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:02.025856  333890 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:02.026959  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:02.026992  333890 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:02.027007  333890 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:02.027033  333890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:02.027086  333890 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:02.027100  333890 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:02.027214  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.048858  333890 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:02.048877  333890 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:02.048892  333890 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:02.048922  333890 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:02.048994  333890 start.go:364] duration metric: took 42.67µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:11:02.049011  333890 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:02.049016  333890 fix.go:54] fixHost starting: 
	I1213 09:11:02.049233  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.068302  333890 fix.go:112] recreateIfNeeded on embed-certs-379362: state=Stopped err=<nil>
	W1213 09:11:02.068327  333890 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:10:59.583124  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.082475  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.629196  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:11:03.625367  323665 pod_ready.go:94] pod "coredns-7d764666f9-r95cr" is "Ready"
	I1213 09:11:03.625394  323665 pod_ready.go:86] duration metric: took 37.505010805s for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.628034  323665 pod_ready.go:83] waiting for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.631736  323665 pod_ready.go:94] pod "etcd-no-preload-291522" is "Ready"
	I1213 09:11:03.631760  323665 pod_ready.go:86] duration metric: took 3.705789ms for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.633687  323665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.637223  323665 pod_ready.go:94] pod "kube-apiserver-no-preload-291522" is "Ready"
	I1213 09:11:03.637246  323665 pod_ready.go:86] duration metric: took 3.541562ms for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.638918  323665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.823946  323665 pod_ready.go:94] pod "kube-controller-manager-no-preload-291522" is "Ready"
	I1213 09:11:03.823973  323665 pod_ready.go:86] duration metric: took 185.03756ms for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.024005  323665 pod_ready.go:83] waiting for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.424202  323665 pod_ready.go:94] pod "kube-proxy-ktgbz" is "Ready"
	I1213 09:11:04.424226  323665 pod_ready.go:86] duration metric: took 400.196554ms for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.624268  323665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023621  323665 pod_ready.go:94] pod "kube-scheduler-no-preload-291522" is "Ready"
	I1213 09:11:05.023647  323665 pod_ready.go:86] duration metric: took 399.354065ms for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023659  323665 pod_ready.go:40] duration metric: took 38.976009117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:05.066541  323665 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:05.068302  323665 out.go:179] * Done! kubectl is now configured to use "no-preload-291522" cluster and "default" namespace by default
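The pod_ready waits above map roughly onto a plain kubectl check against the context this run just created; the following is only a hedged sketch, assuming the "no-preload-291522" kubectl context written by minikube still exists on the host.
    # Hedged sketch: re-check the same kube-system pods the pod_ready waits above verified.
    kubectl --context no-preload-291522 -n kube-system get pods -o wide
    kubectl --context no-preload-291522 -n kube-system wait pod --all --for=condition=Ready --timeout=120s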
	I1213 09:11:02.070162  333890 out.go:252] * Restarting existing docker container for "embed-certs-379362" ...
	I1213 09:11:02.070221  333890 cli_runner.go:164] Run: docker start embed-certs-379362
	I1213 09:11:02.321118  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.339633  333890 kic.go:430] container "embed-certs-379362" state is running.
	I1213 09:11:02.340097  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:02.359827  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.360100  333890 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:02.360192  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:02.380390  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:02.380635  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:02.380649  333890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:02.381372  333890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33123: read: connection reset by peer
	I1213 09:11:05.518562  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.518593  333890 ubuntu.go:182] provisioning hostname "embed-certs-379362"
	I1213 09:11:05.518644  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.537736  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.538011  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.538026  333890 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-379362 && echo "embed-certs-379362" | sudo tee /etc/hostname
	I1213 09:11:05.683114  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.683217  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.702249  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.702628  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.702658  333890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-379362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-379362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-379362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:05.839172  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
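The hostname and /etc/hosts provisioning above can be spot-checked directly against the node container; a hedged sketch, assuming the "embed-certs-379362" container from this log is still running on the same host.
    # Hedged sketch: verify the hostname provisioning performed over SSH above.
    docker exec embed-certs-379362 hostname
    docker exec embed-certs-379362 grep embed-certs-379362 /etc/hosts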
	I1213 09:11:05.839203  333890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:05.839221  333890 ubuntu.go:190] setting up certificates
	I1213 09:11:05.839232  333890 provision.go:84] configureAuth start
	I1213 09:11:05.839277  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:05.857894  333890 provision.go:143] copyHostCerts
	I1213 09:11:05.857989  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:05.858008  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:05.858077  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:05.858209  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:05.858219  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:05.858255  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:05.858308  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:05.858315  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:05.858338  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:05.858384  333890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.embed-certs-379362 san=[127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube]
	I1213 09:11:05.995748  333890 provision.go:177] copyRemoteCerts
	I1213 09:11:05.995808  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:05.995841  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.014933  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.113890  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:06.131828  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:11:06.149744  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:11:06.167004  333890 provision.go:87] duration metric: took 327.760831ms to configureAuth
	I1213 09:11:06.167034  333890 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:06.167248  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:06.167371  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.186434  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:06.186700  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:06.186718  333890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:06.519456  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:06.519500  333890 machine.go:97] duration metric: took 4.159363834s to provisionDockerMachine
	I1213 09:11:06.519515  333890 start.go:293] postStartSetup for "embed-certs-379362" (driver="docker")
	I1213 09:11:06.519528  333890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:06.519593  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:06.519656  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.538380  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.634842  333890 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:06.638452  333890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:06.638473  333890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:06.638495  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:06.638554  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:06.638653  333890 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:06.638763  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:06.646671  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:06.664174  333890 start.go:296] duration metric: took 144.644973ms for postStartSetup
	I1213 09:11:06.664268  333890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:06.664305  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.683615  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.779502  333890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:06.785404  333890 fix.go:56] duration metric: took 4.736380482s for fixHost
	I1213 09:11:06.785434  333890 start.go:83] releasing machines lock for "embed-certs-379362", held for 4.736428362s
	I1213 09:11:06.785524  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:06.808003  333890 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:06.808061  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.808078  333890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:06.808172  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.833412  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.833605  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	W1213 09:11:02.373908  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:04.872547  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:06.873449  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:06.984735  333890 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:06.991583  333890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:07.026938  333890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:07.031772  333890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:07.031840  333890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:07.039992  333890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:07.040013  333890 start.go:496] detecting cgroup driver to use...
	I1213 09:11:07.040046  333890 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:07.040090  333890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:07.054785  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:07.068014  333890 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:07.068059  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:07.083003  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:07.096366  333890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:07.183847  333890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:07.269721  333890 docker.go:234] disabling docker service ...
	I1213 09:11:07.269771  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:07.285161  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:07.297389  333890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:07.384882  333890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:07.467142  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:07.481367  333890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:07.495794  333890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:07.495842  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.505016  333890 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:07.505072  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.514873  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.523864  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.532764  333890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:07.541036  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.549898  333890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.558670  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.568189  333890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:07.575855  333890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:07.582903  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:07.670568  333890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:11:07.843644  333890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:07.843715  333890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:07.848433  333890 start.go:564] Will wait 60s for crictl version
	I1213 09:11:07.848528  333890 ssh_runner.go:195] Run: which crictl
	I1213 09:11:07.852256  333890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:07.876837  333890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:07.876932  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.904955  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.933896  333890 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
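The sed edits above only touch /etc/crio/crio.conf.d/02-crio.conf inside the node; a hedged sketch of confirming the resulting pause-image and cgroup settings (container name and config path are taken from this log, the commands themselves are illustrative and not part of the test run).
    # Hedged sketch: inspect the CRI-O drop-in written by the sed commands above.
    docker exec embed-certs-379362 grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # crictl reads the /etc/crictl.yaml written earlier and reports runtime status.
    docker exec embed-certs-379362 crictl info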
	W1213 09:11:04.083292  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	I1213 09:11:06.583127  328914 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:06.583165  328914 node_ready.go:38] duration metric: took 11.003480314s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:06.583181  328914 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:06.583231  328914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:06.594500  328914 api_server.go:72] duration metric: took 11.299110433s to wait for apiserver process to appear ...
	I1213 09:11:06.594525  328914 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:06.594541  328914 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:06.599417  328914 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1213 09:11:06.600336  328914 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:06.600358  328914 api_server.go:131] duration metric: took 5.826824ms to wait for apiserver health ...
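The healthz probe above talks directly to the apiserver on the container IP; a hedged sketch of the equivalent manual check from the same host (address and port come from the log; -k only skips TLS verification and the check assumes the default anonymous access to /healthz is still allowed).
    # Hedged sketch: reproduce the apiserver healthz probe logged above.
    curl -k https://192.168.103.2:8444/healthz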
	I1213 09:11:06.600365  328914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:06.603252  328914 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:06.603278  328914 system_pods.go:61] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.603283  328914 system_pods.go:61] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.603289  328914 system_pods.go:61] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.603292  328914 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.603296  328914 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.603302  328914 system_pods.go:61] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.603305  328914 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.603310  328914 system_pods.go:61] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.603316  328914 system_pods.go:74] duration metric: took 2.9457ms to wait for pod list to return data ...
	I1213 09:11:06.603325  328914 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:06.605317  328914 default_sa.go:45] found service account: "default"
	I1213 09:11:06.605334  328914 default_sa.go:55] duration metric: took 2.001953ms for default service account to be created ...
	I1213 09:11:06.605341  328914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:06.607611  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.607633  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.607645  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.607651  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.607654  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.607658  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.607662  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.607665  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.607669  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.607685  328914 retry.go:31] will retry after 272.651119ms: missing components: kube-dns
	I1213 09:11:06.885001  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.885038  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.885046  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.885055  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.885061  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.885067  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.885073  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.885078  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.885087  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.885109  328914 retry.go:31] will retry after 389.523569ms: missing components: kube-dns
	I1213 09:11:07.279258  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.279287  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.279293  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.279298  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.279302  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.279305  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.279308  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.279317  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.279322  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.279335  328914 retry.go:31] will retry after 448.006807ms: missing components: kube-dns
	I1213 09:11:07.732933  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.732978  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.732988  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.732997  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.733002  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.733008  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.733012  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.733016  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.733020  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.733031  328914 system_pods.go:126] duration metric: took 1.127684936s to wait for k8s-apps to be running ...
	I1213 09:11:07.733038  328914 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:07.733082  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:07.749643  328914 system_svc.go:56] duration metric: took 16.594824ms WaitForService to wait for kubelet
	I1213 09:11:07.749674  328914 kubeadm.go:587] duration metric: took 12.454300158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:07.749698  328914 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:07.752080  328914 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:07.752112  328914 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:07.752131  328914 node_conditions.go:105] duration metric: took 2.42792ms to run NodePressure ...
	I1213 09:11:07.752146  328914 start.go:242] waiting for startup goroutines ...
	I1213 09:11:07.752160  328914 start.go:247] waiting for cluster config update ...
	I1213 09:11:07.752173  328914 start.go:256] writing updated cluster config ...
	I1213 09:11:07.752508  328914 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:07.757523  328914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:07.761238  328914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.766432  328914 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:11:07.766458  328914 pod_ready.go:86] duration metric: took 5.192246ms for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.832062  328914 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.840179  328914 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.840203  328914 pod_ready.go:86] duration metric: took 8.11705ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.842550  328914 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.846547  328914 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.846570  328914 pod_ready.go:86] duration metric: took 3.999501ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.848547  328914 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.161326  328914 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:08.161349  328914 pod_ready.go:86] duration metric: took 312.780385ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.372943  324697 pod_ready.go:94] pod "coredns-5dd5756b68-g66tb" is "Ready"
	I1213 09:11:07.372967  324697 pod_ready.go:86] duration metric: took 39.505999616s for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.375663  324697 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.379892  324697 pod_ready.go:94] pod "etcd-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.379916  324697 pod_ready.go:86] duration metric: took 4.234738ms for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.382722  324697 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.386579  324697 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.386602  324697 pod_ready.go:86] duration metric: took 3.859665ms for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.388935  324697 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.570936  324697 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.570963  324697 pod_ready.go:86] duration metric: took 182.006223ms for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.772324  324697 pod_ready.go:83] waiting for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.173608  324697 pod_ready.go:94] pod "kube-proxy-6bkvj" is "Ready"
	I1213 09:11:08.173638  324697 pod_ready.go:86] duration metric: took 401.292694ms for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.372409  324697 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772063  324697 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-234538" is "Ready"
	I1213 09:11:08.772095  324697 pod_ready.go:86] duration metric: took 399.659792ms for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772110  324697 pod_ready.go:40] duration metric: took 40.909481149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:08.832194  324697 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 09:11:08.834797  324697 out.go:203] 
	W1213 09:11:08.836008  324697 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 09:11:08.837190  324697 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 09:11:08.838445  324697 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-234538" cluster and "default" namespace by default
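The skew warning above concerns the host's kubectl 1.34.3 against the 1.28.0 cluster; the printed hint can be followed literally, sketched here under the assumption that the "old-k8s-version-234538" profile is still present.
    # Hedged sketch: use minikube's bundled kubectl matching the profile's Kubernetes version.
    minikube -p old-k8s-version-234538 kubectl -- version
    minikube -p old-k8s-version-234538 kubectl -- get pods -A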
	I1213 09:11:07.935243  333890 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:07.953455  333890 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:07.957554  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:07.968284  333890 kubeadm.go:884] updating cluster {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:07.968419  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:07.968476  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.002674  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.002700  333890 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:08.002756  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.028193  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.028216  333890 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:08.028225  333890 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 09:11:08.028332  333890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-379362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
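The unit override above is copied a few steps later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below); a hedged sketch of inspecting the drop-in and the merged kubelet unit inside the node container.
    # Hedged sketch: view the kubelet drop-in and the fully merged unit on the node.
    docker exec embed-certs-379362 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    docker exec embed-certs-379362 systemctl cat kubelet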
	I1213 09:11:08.028403  333890 ssh_runner.go:195] Run: crio config
	I1213 09:11:08.074930  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:08.074949  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:08.074961  333890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:08.074981  333890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-379362 NodeName:embed-certs-379362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:08.075100  333890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-379362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
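The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps later; as a hedged aside, a config of this shape can be sanity-checked with kubeadm itself, assuming a kubeadm binary new enough to have "config validate" (v1.26+) such as the one found under /var/lib/minikube/binaries.
    # Hedged sketch: validate the generated kubeadm config from inside the node container.
    docker exec embed-certs-379362 /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new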
	
	I1213 09:11:08.075176  333890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:08.083542  333890 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:08.083624  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:08.091566  333890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 09:11:08.104461  333890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:08.117321  333890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 09:11:08.130224  333890 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:08.134005  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:08.144074  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:08.224481  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:08.245774  333890 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362 for IP: 192.168.85.2
	I1213 09:11:08.245792  333890 certs.go:195] generating shared ca certs ...
	I1213 09:11:08.245810  333890 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:08.245989  333890 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:08.246048  333890 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:08.246059  333890 certs.go:257] generating profile certs ...
	I1213 09:11:08.246147  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/client.key
	I1213 09:11:08.246205  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key.814e7b8a
	I1213 09:11:08.246246  333890 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key
	I1213 09:11:08.246349  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:08.246386  333890 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:08.246398  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:08.246422  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:08.246445  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:08.246474  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:08.246555  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:08.247224  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:08.265750  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:08.284698  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:08.304326  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:08.329185  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:11:08.348060  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:11:08.365610  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:08.383456  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:11:08.400955  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:08.418539  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:08.436393  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:08.454266  333890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:08.466744  333890 ssh_runner.go:195] Run: openssl version
	I1213 09:11:08.473100  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.480536  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:08.488383  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492189  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492239  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.529232  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:08.537596  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.545251  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:08.552715  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556579  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556629  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.600524  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:11:08.608451  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.616267  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:08.624437  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628633  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628687  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.663783  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:08.672093  333890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:08.676012  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:08.714649  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:08.753817  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:08.802703  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:08.851736  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:08.921259  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:11:08.977170  333890 kubeadm.go:401] StartCluster: {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:08.977291  333890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:08.977362  333890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:09.015784  333890 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:09.015811  333890 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:09.015818  333890 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:09.015825  333890 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:09.015829  333890 cri.go:89] found id: ""
	I1213 09:11:09.015875  333890 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:09.030638  333890 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:09.030704  333890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:09.039128  333890 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:09.039178  333890 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:09.039248  333890 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:09.047141  333890 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:09.048055  333890 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-379362" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.048563  333890 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-379362" cluster setting kubeconfig missing "embed-certs-379362" context setting]
	I1213 09:11:09.049221  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.050957  333890 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:09.059934  333890 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 09:11:09.059966  333890 kubeadm.go:602] duration metric: took 20.780797ms to restartPrimaryControlPlane
	I1213 09:11:09.059975  333890 kubeadm.go:403] duration metric: took 82.814517ms to StartCluster
	I1213 09:11:09.059992  333890 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.060056  333890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.062377  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.062685  333890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:09.062757  333890 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:09.062848  333890 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-379362"
	I1213 09:11:09.062864  333890 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-379362"
	W1213 09:11:09.062872  333890 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:09.062901  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062909  333890 addons.go:70] Setting dashboard=true in profile "embed-certs-379362"
	I1213 09:11:09.062926  333890 addons.go:239] Setting addon dashboard=true in "embed-certs-379362"
	W1213 09:11:09.062935  333890 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:09.062946  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:09.062959  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062995  333890 addons.go:70] Setting default-storageclass=true in profile "embed-certs-379362"
	I1213 09:11:09.063010  333890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-379362"
	I1213 09:11:09.063289  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063415  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063500  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.067611  333890 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:09.069241  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:09.089368  333890 addons.go:239] Setting addon default-storageclass=true in "embed-certs-379362"
	W1213 09:11:09.089396  333890 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:09.089421  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.089959  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.091596  333890 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:09.091621  333890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:09.094004  333890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.094022  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:09.094036  333890 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:08.362204  328914 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.762127  328914 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:11:08.762159  328914 pod_ready.go:86] duration metric: took 399.931988ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.963595  328914 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362581  328914 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:09.362857  328914 pod_ready.go:86] duration metric: took 399.227137ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362881  328914 pod_ready.go:40] duration metric: took 1.60532416s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:09.427945  328914 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:09.429725  328914 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	I1213 09:11:09.094083  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.094976  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:09.094990  333890 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:09.095048  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.122479  333890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.122516  333890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:09.122573  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.124934  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.126649  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.157673  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.240152  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:09.256813  333890 node_ready.go:35] waiting up to 6m0s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:09.266223  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:09.266249  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:09.266409  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.280359  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.282762  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:09.282784  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:09.306961  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:09.307019  333890 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:09.323015  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:09.323036  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:09.339143  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:09.339166  333890 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:09.367621  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:09.367646  333890 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:09.382705  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:09.382728  333890 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:09.398185  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:09.398219  333890 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:09.414356  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:09.414389  333890 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:09.430652  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:10.622141  333890 node_ready.go:49] node "embed-certs-379362" is "Ready"
	I1213 09:11:10.622177  333890 node_ready.go:38] duration metric: took 1.365330808s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:10.622194  333890 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:10.622248  333890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:11.141921  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875483061s)
	I1213 09:11:11.141933  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861538443s)
	I1213 09:11:11.142098  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.711411401s)
	I1213 09:11:11.142138  333890 api_server.go:72] duration metric: took 2.079421919s to wait for apiserver process to appear ...
	I1213 09:11:11.142151  333890 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:11.142170  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.143945  333890 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-379362 addons enable metrics-server
	
	I1213 09:11:11.149734  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.149761  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:11.155576  333890 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:11.156748  333890 addons.go:530] duration metric: took 2.094000513s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:11:11.642554  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.648040  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.648073  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:12.142953  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:12.147533  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 09:11:12.148602  333890 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:12.148630  333890 api_server.go:131] duration metric: took 1.006470603s to wait for apiserver health ...
	I1213 09:11:12.148643  333890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:12.152383  333890 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:12.152411  333890 system_pods.go:61] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.152418  333890 system_pods.go:61] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.152428  333890 system_pods.go:61] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.152449  333890 system_pods.go:61] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.152462  333890 system_pods.go:61] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.152469  333890 system_pods.go:61] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.152495  333890 system_pods.go:61] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.152526  333890 system_pods.go:61] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.152535  333890 system_pods.go:74] duration metric: took 3.881548ms to wait for pod list to return data ...
	I1213 09:11:12.152549  333890 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:12.155530  333890 default_sa.go:45] found service account: "default"
	I1213 09:11:12.155557  333890 default_sa.go:55] duration metric: took 3.001063ms for default service account to be created ...
	I1213 09:11:12.155568  333890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:12.158432  333890 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:12.158455  333890 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.158463  333890 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.158470  333890 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.158476  333890 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.158520  333890 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.158534  333890 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.158543  333890 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.158551  333890 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.158563  333890 system_pods.go:126] duration metric: took 2.988393ms to wait for k8s-apps to be running ...
	I1213 09:11:12.158571  333890 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:12.158615  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:12.172411  333890 system_svc.go:56] duration metric: took 13.834615ms WaitForService to wait for kubelet
	I1213 09:11:12.172438  333890 kubeadm.go:587] duration metric: took 3.109721475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:12.172457  333890 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:12.175344  333890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:12.175368  333890 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:12.175391  333890 node_conditions.go:105] duration metric: took 2.92165ms to run NodePressure ...
	I1213 09:11:12.175405  333890 start.go:242] waiting for startup goroutines ...
	I1213 09:11:12.175422  333890 start.go:247] waiting for cluster config update ...
	I1213 09:11:12.175436  333890 start.go:256] writing updated cluster config ...
	I1213 09:11:12.175704  333890 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:12.179850  333890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:12.183357  333890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:11:14.188818  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:16.189566  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:10:46 no-preload-291522 crio[564]: time="2025-12-13T09:10:46.921539627Z" level=info msg="Started container" PID=1747 containerID=4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper id=fbe77b26-d117-44a3-b200-0491b88341b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaf8f7bf85318983838f277e9d03cce753c63e422473ead6071b9938352043d0
	Dec 13 09:10:47 no-preload-291522 crio[564]: time="2025-12-13T09:10:47.315437881Z" level=info msg="Removing container: d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d" id=6a93e2b7-0c56-42fa-8c66-48e7e9f82efd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:47 no-preload-291522 crio[564]: time="2025-12-13T09:10:47.330007828Z" level=info msg="Removed container d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=6a93e2b7-0c56-42fa-8c66-48e7e9f82efd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.339683907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7447075e-6322-4d59-904e-2e5790cd56ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.340716488Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e116b2bd-a778-462d-a567-da1634ca4f70 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.341796328Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7939fcc-f5ae-47c0-b794-e7d4a027e644 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.341942103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347192006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347401142Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/728d74451a0a6e9dcf33ecedd64eb3350ea0bca0ce55156685cdb035e35a62c4/merged/etc/passwd: no such file or directory"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347435094Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/728d74451a0a6e9dcf33ecedd64eb3350ea0bca0ce55156685cdb035e35a62c4/merged/etc/group: no such file or directory"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.348087069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.374062045Z" level=info msg="Created container 68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7: kube-system/storage-provisioner/storage-provisioner" id=b7939fcc-f5ae-47c0-b794-e7d4a027e644 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.374690965Z" level=info msg="Starting container: 68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7" id=0e28b774-6e31-438d-94d1-6f983ff5a4cc name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.376580285Z" level=info msg="Started container" PID=1761 containerID=68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7 description=kube-system/storage-provisioner/storage-provisioner id=0e28b774-6e31-438d-94d1-6f983ff5a4cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f16d0a7d1efed4699c19dad3f92bcc3524716f994eb3a3577a23fd153c3bde3
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.214111988Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59e8c1ce-5b99-4dc1-a35e-539137239075 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.215661135Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b778d5bc-b38a-4d2d-8e29-43ecbfae54e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.216759204Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=7aff1f2f-e9bf-4537-b237-9ed30a75e35e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.216905778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.22413067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.224600883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.256794914Z" level=info msg="Created container b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=7aff1f2f-e9bf-4537-b237-9ed30a75e35e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.257578102Z" level=info msg="Starting container: b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed" id=4ad96453-1845-4f24-b5a6-2896eb190ff2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.259659041Z" level=info msg="Started container" PID=1797 containerID=b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper id=4ad96453-1845-4f24-b5a6-2896eb190ff2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaf8f7bf85318983838f277e9d03cce753c63e422473ead6071b9938352043d0
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.377964463Z" level=info msg="Removing container: 4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9" id=475c49f8-96a8-4d4e-8b41-f4b697163de3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.388778783Z" level=info msg="Removed container 4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=475c49f8-96a8-4d4e-8b41-f4b697163de3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b568d53cbfbef       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   eaf8f7bf85318       dashboard-metrics-scraper-867fb5f87b-nkzwn   kubernetes-dashboard
	68c6761f0f051       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   5f16d0a7d1efe       storage-provisioner                          kube-system
	a6598e1c508f2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   35fe07600471c       kubernetes-dashboard-b84665fb8-zg7qj         kubernetes-dashboard
	5bfd9401264d3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   a57609b7fa511       coredns-7d764666f9-r95cr                     kube-system
	b56e8734b820d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   d01a14a101cd1       busybox                                      default
	1db71287c16d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   5f16d0a7d1efe       storage-provisioner                          kube-system
	f2ff8aa0f7b65       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   8863556509a97       kindnet-sm6z6                                kube-system
	0cc2dea823087       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           54 seconds ago      Running             kube-proxy                  0                   e212295a2e1ca       kube-proxy-ktgbz                             kube-system
	f8d1691eeb238       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           57 seconds ago      Running             kube-apiserver              0                   621525de0c5ba       kube-apiserver-no-preload-291522             kube-system
	c1810afee5381       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           57 seconds ago      Running             kube-controller-manager     0                   e03ab6d576b0a       kube-controller-manager-no-preload-291522    kube-system
	595a2d6f50c49       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   48ad4cb4d9089       etcd-no-preload-291522                       kube-system
	c5352b1836776       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           57 seconds ago      Running             kube-scheduler              0                   ede6d50750099       kube-scheduler-no-preload-291522             kube-system
	
	
	==> coredns [5bfd9401264d34d0b0d2eb0000b940d6ff3b6283b164ef83fed5d41fa173d160] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55537 - 37493 "HINFO IN 7127739216525855412.8379753089678689580. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01848884s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-291522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-291522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=no-preload-291522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291522
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-291522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                c2de4dd6-9253-460d-81e1-9ad6236c08d3
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-r95cr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-291522                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-sm6z6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-291522              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-291522     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-ktgbz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-291522              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nkzwn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zg7qj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-291522 event: Registered Node no-preload-291522 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node no-preload-291522 event: Registered Node no-preload-291522 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [595a2d6f50c49e4264151a01e2cf2cd1d109e03af0557b967299dbbc387d9a26] <==
	{"level":"warn","ts":"2025-12-13T09:10:23.686901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.696746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.708271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.718264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.729842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.742410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.752042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.759165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.781893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.789320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.797941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.804841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.852955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:10:32.392270Z","caller":"traceutil/trace.go:172","msg":"trace[2069723077] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"112.562942ms","start":"2025-12-13T09:10:32.279685Z","end":"2025-12-13T09:10:32.392248Z","steps":["trace[2069723077] 'process raft request'  (duration: 112.419291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:10:32.782926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.475832ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766804244814047 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" mod_revision:576 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" value_size:1283 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:10:32.783043Z","caller":"traceutil/trace.go:172","msg":"trace[187860706] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"386.547288ms","start":"2025-12-13T09:10:32.396478Z","end":"2025-12-13T09:10:32.783025Z","steps":["trace[187860706] 'process raft request'  (duration: 90.459574ms)","trace[187860706] 'compare'  (duration: 295.367597ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:32.783157Z","caller":"traceutil/trace.go:172","msg":"trace[1466581601] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"386.636187ms","start":"2025-12-13T09:10:32.396508Z","end":"2025-12-13T09:10:32.783144Z","steps":["trace[1466581601] 'process raft request'  (duration: 386.525494ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:32.783119Z","caller":"traceutil/trace.go:172","msg":"trace[1711225685] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"160.325249ms","start":"2025-12-13T09:10:32.622770Z","end":"2025-12-13T09:10:32.783095Z","steps":["trace[1711225685] 'read index received'  (duration: 42.997562ms)","trace[1711225685] 'applied index is now lower than readState.Index'  (duration: 117.325705ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:10:32.783224Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.396465Z","time spent":"386.724352ms","remote":"127.0.0.1:54536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":961,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:573 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" value_size:883 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"warn","ts":"2025-12-13T09:10:32.783127Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.396456Z","time spent":"386.628828ms","remote":"127.0.0.1:54762","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1363,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" mod_revision:576 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" value_size:1283 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" > >"}
	{"level":"info","ts":"2025-12-13T09:10:32.783264Z","caller":"traceutil/trace.go:172","msg":"trace[203550004] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"385.852997ms","start":"2025-12-13T09:10:32.397402Z","end":"2025-12-13T09:10:32.783255Z","steps":["trace[203550004] 'process raft request'  (duration: 385.69971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:10:32.783332Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.397391Z","time spent":"385.895316ms","remote":"127.0.0.1:55192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3195,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" mod_revision:564 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" value_size:3114 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" > >"}
	{"level":"warn","ts":"2025-12-13T09:10:32.783380Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.612113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-r95cr\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-13T09:10:32.783408Z","caller":"traceutil/trace.go:172","msg":"trace[19255925] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-r95cr; range_end:; response_count:1; response_revision:589; }","duration":"160.64375ms","start":"2025-12-13T09:10:32.622755Z","end":"2025-12-13T09:10:32.783399Z","steps":["trace[19255925] 'agreement among raft nodes before linearized reading'  (duration: 160.456585ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.646411Z","caller":"traceutil/trace.go:172","msg":"trace[1708146057] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"176.825454ms","start":"2025-12-13T09:10:33.469561Z","end":"2025-12-13T09:10:33.646387Z","steps":["trace[1708146057] 'process raft request'  (duration: 176.701965ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:20 up 53 min,  0 user,  load average: 3.93, 3.52, 2.36
	Linux no-preload-291522 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2ff8aa0f7b65be2df303c832a34fdd6fb1cd31cf904c4955c07b3b3c73b8a8f] <==
	I1213 09:10:25.833200       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:25.833761       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:10:25.834042       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:25.834066       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:25.834096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:26.188241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:26.188269       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:26.188282       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:26.188526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:26.588475       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:26.588583       1 metrics.go:72] Registering metrics
	I1213 09:10:26.588689       1 controller.go:711] "Syncing nftables rules"
	I1213 09:10:36.190611       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:36.190657       1 main.go:301] handling current node
	I1213 09:10:46.188678       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:46.188722       1 main.go:301] handling current node
	I1213 09:10:56.188816       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:56.188857       1 main.go:301] handling current node
	I1213 09:11:06.189350       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:11:06.189403       1 main.go:301] handling current node
	I1213 09:11:16.188458       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:11:16.188521       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f8d1691eeb23819007271a5f04b8b81699f4e145a11d54fec11f89910cce3eda] <==
	I1213 09:10:24.566361       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:10:24.572610       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 09:10:24.572696       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 09:10:24.572850       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 09:10:24.573362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 09:10:24.564796       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:10:24.580623       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:24.582659       1 policy_source.go:248] refreshing policies
	I1213 09:10:24.565273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:10:24.593576       1 cache.go:39] Caches are synced for autoregister controller
	E1213 09:10:24.594653       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 09:10:24.607726       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:10:24.654455       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:25.115219       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:10:25.158811       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:10:25.191168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:25.201956       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:25.218477       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:10:25.292123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.245.80"}
	I1213 09:10:25.307250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.133.114"}
	I1213 09:10:25.369210       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:10:28.064741       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:10:28.309842       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:10:28.361665       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c1810afee538180d88c84228d084c1882c4e4161efd0b381dfe49512b1daff51] <==
	I1213 09:10:27.668641       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.669037       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.669147       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670510       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670810       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670872       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670896       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671076       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671097       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671148       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671151       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671181       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671204       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671246       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.667899       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671308       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.668129       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.667902       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671181       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.678232       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.679175       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:27.769006       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.769026       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:10:27.769033       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:10:27.779685       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0cc2dea823087fae8ecd9ac5823e4c2ef2cd22c680ca6865c5debeb27a6c9b96] <==
	I1213 09:10:25.662309       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:10:25.724433       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:25.825243       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:25.825279       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:10:25.825396       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:10:25.848558       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:25.848645       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:10:25.854350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:10:25.854901       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:10:25.854987       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:25.857238       1 config.go:200] "Starting service config controller"
	I1213 09:10:25.857258       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:10:25.857285       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:10:25.857291       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:10:25.857376       1 config.go:309] "Starting node config controller"
	I1213 09:10:25.857392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:10:25.857399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:10:25.857307       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:10:25.858223       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:10:25.958056       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:10:25.958053       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:10:25.958319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c5352b1836776c6b17ea7fec0581ec2ac4de137ad305bf9d95497fdf8f4fb634] <==
	I1213 09:10:22.975818       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:10:24.393746       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:10:24.393794       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:10:24.393808       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:10:24.393819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:10:24.480933       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 09:10:24.481055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:24.503756       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:10:24.503886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:24.523256       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:10:24.523331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:10:24.604634       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:10:40 no-preload-291522 kubelet[716]: E1213 09:10:40.616112     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-291522" containerName="kube-controller-manager"
	Dec 13 09:10:46 no-preload-291522 kubelet[716]: E1213 09:10:46.858228     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:46 no-preload-291522 kubelet[716]: I1213 09:10:46.858268     716 scope.go:122] "RemoveContainer" containerID="d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: I1213 09:10:47.314111     716 scope.go:122] "RemoveContainer" containerID="d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: E1213 09:10:47.314299     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: I1213 09:10:47.314333     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: E1213 09:10:47.314544     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: I1213 09:10:56.339252     716 scope.go:122] "RemoveContainer" containerID="1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: E1213 09:10:56.858259     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: I1213 09:10:56.858305     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: E1213 09:10:56.858542     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:03 no-preload-291522 kubelet[716]: E1213 09:11:03.468844     716 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r95cr" containerName="coredns"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.212361     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.212936     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.376614     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.376824     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.376862     716 scope.go:122] "RemoveContainer" containerID="b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.377057     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: E1213 09:11:16.858605     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: I1213 09:11:16.858694     716 scope.go:122] "RemoveContainer" containerID="b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: E1213 09:11:16.858934     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:17 no-preload-291522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:17 no-preload-291522 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:17 no-preload-291522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:17 no-preload-291522 systemd[1]: kubelet.service: Consumed 1.819s CPU time.
	
	
	==> kubernetes-dashboard [a6598e1c508f2de6e4501253701b39af6be5786452d68184b46c10c6045bdba5] <==
	2025/12/13 09:10:31 Starting overwatch
	2025/12/13 09:10:31 Using namespace: kubernetes-dashboard
	2025/12/13 09:10:31 Using in-cluster config to connect to apiserver
	2025/12/13 09:10:31 Using secret token for csrf signing
	2025/12/13 09:10:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:10:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:10:31 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 09:10:31 Generating JWE encryption key
	2025/12/13 09:10:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:10:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:10:32 Initializing JWE encryption key from synchronized object
	2025/12/13 09:10:32 Creating in-cluster Sidecar client
	2025/12/13 09:10:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:10:32 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18] <==
	I1213 09:10:25.625832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:10:55.628122       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7] <==
	I1213 09:10:56.388803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:10:56.397014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:10:56.397065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:10:56.399091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:59.854574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:04.114788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:07.712924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:10.766418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.789062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.794560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:13.794708       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:13.794772       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f21198f-5ecf-4114-b32b-88a1a9ef30f7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea became leader
	I1213 09:11:13.794899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea!
	W1213 09:11:13.796743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.799606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:13.895198       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea!
	W1213 09:11:15.803376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:15.807934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:17.812296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:17.817067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:19.820363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:19.825110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-291522 -n no-preload-291522: exit status 2 (381.881607ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-291522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-291522
helpers_test.go:244: (dbg) docker inspect no-preload-291522:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	        "Created": "2025-12-13T09:09:03.465040092Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:10:15.943549827Z",
	            "FinishedAt": "2025-12-13T09:10:15.000269568Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/hosts",
	        "LogPath": "/var/lib/docker/containers/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f/8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f-json.log",
	        "Name": "/no-preload-291522",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-291522:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-291522",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8646883e9b39a53874e525b0883dcba3471f8345015be0aaf8cea8ff11333a5f",
	                "LowerDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/403e75f5519deacbc31ed8646ccb8a414adf3a8394c0ecafea0ca0f3aa14db2e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-291522",
	                "Source": "/var/lib/docker/volumes/no-preload-291522/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-291522",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-291522",
	                "name.minikube.sigs.k8s.io": "no-preload-291522",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4acb938dbe9331a1b9d3581af27aa00a54990489a55f0065c9c9761bdb97041",
	            "SandboxKey": "/var/run/docker/netns/a4acb938dbe9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-291522": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dfb09ffcf6775e2f48749a68987a54ab42f782835937d81dea2e3e4a543a7d9d",
	                    "EndpointID": "820ac883dc28388a97c372cfa41f8c731307b095b16bd6baefb32ac552763df2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "2a:a4:b7:94:f1:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-291522",
	                        "8646883e9b39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522: exit status 2 (336.933564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-291522 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-291522 logs -n 25: (1.159199354s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                    │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                               │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:01.859763  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.859768  333890 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:01.859780  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.860007  333890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:01.860461  333890 out.go:368] Setting JSON to false
	I1213 09:11:01.861836  333890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3214,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:01.861905  333890 start.go:143] virtualization: kvm guest
	I1213 09:11:01.863731  333890 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:01.865249  333890 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:01.865281  333890 notify.go:221] Checking for updates...
	I1213 09:11:01.867359  333890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:01.868519  333890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:01.869842  333890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:01.871012  333890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:01.872143  333890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:01.873683  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:01.874233  333890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:01.901548  333890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:01.901656  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:01.959403  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:01.949301411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:01.959565  333890 docker.go:319] overlay module found
	I1213 09:11:01.961826  333890 out.go:179] * Using the docker driver based on existing profile
	W1213 09:10:57.872528  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:59.873309  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:01.962862  333890 start.go:309] selected driver: docker
	I1213 09:11:01.962874  333890 start.go:927] validating driver "docker" against &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:01.962966  333890 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:01.963566  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:02.021259  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:02.010959916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:02.021565  333890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:02.021623  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:02.021676  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:02.021713  333890 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:02.023438  333890 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:11:02.024571  333890 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:02.025856  333890 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:02.026959  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:02.026992  333890 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:02.027007  333890 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:02.027033  333890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:02.027086  333890 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:02.027100  333890 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:02.027214  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.048858  333890 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:02.048877  333890 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:02.048892  333890 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:02.048922  333890 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:02.048994  333890 start.go:364] duration metric: took 42.67µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:11:02.049011  333890 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:02.049016  333890 fix.go:54] fixHost starting: 
	I1213 09:11:02.049233  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.068302  333890 fix.go:112] recreateIfNeeded on embed-certs-379362: state=Stopped err=<nil>
	W1213 09:11:02.068327  333890 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:10:59.583124  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.082475  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.629196  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:11:03.625367  323665 pod_ready.go:94] pod "coredns-7d764666f9-r95cr" is "Ready"
	I1213 09:11:03.625394  323665 pod_ready.go:86] duration metric: took 37.505010805s for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.628034  323665 pod_ready.go:83] waiting for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.631736  323665 pod_ready.go:94] pod "etcd-no-preload-291522" is "Ready"
	I1213 09:11:03.631760  323665 pod_ready.go:86] duration metric: took 3.705789ms for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.633687  323665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.637223  323665 pod_ready.go:94] pod "kube-apiserver-no-preload-291522" is "Ready"
	I1213 09:11:03.637246  323665 pod_ready.go:86] duration metric: took 3.541562ms for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.638918  323665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.823946  323665 pod_ready.go:94] pod "kube-controller-manager-no-preload-291522" is "Ready"
	I1213 09:11:03.823973  323665 pod_ready.go:86] duration metric: took 185.03756ms for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.024005  323665 pod_ready.go:83] waiting for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.424202  323665 pod_ready.go:94] pod "kube-proxy-ktgbz" is "Ready"
	I1213 09:11:04.424226  323665 pod_ready.go:86] duration metric: took 400.196554ms for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.624268  323665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023621  323665 pod_ready.go:94] pod "kube-scheduler-no-preload-291522" is "Ready"
	I1213 09:11:05.023647  323665 pod_ready.go:86] duration metric: took 399.354065ms for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023659  323665 pod_ready.go:40] duration metric: took 38.976009117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:05.066541  323665 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:05.068302  323665 out.go:179] * Done! kubectl is now configured to use "no-preload-291522" cluster and "default" namespace by default
	I1213 09:11:02.070162  333890 out.go:252] * Restarting existing docker container for "embed-certs-379362" ...
	I1213 09:11:02.070221  333890 cli_runner.go:164] Run: docker start embed-certs-379362
	I1213 09:11:02.321118  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.339633  333890 kic.go:430] container "embed-certs-379362" state is running.
	I1213 09:11:02.340097  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:02.359827  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.360100  333890 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:02.360192  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:02.380390  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:02.380635  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:02.380649  333890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:02.381372  333890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33123: read: connection reset by peer
	I1213 09:11:05.518562  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.518593  333890 ubuntu.go:182] provisioning hostname "embed-certs-379362"
	I1213 09:11:05.518644  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.537736  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.538011  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.538026  333890 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-379362 && echo "embed-certs-379362" | sudo tee /etc/hostname
	I1213 09:11:05.683114  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.683217  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.702249  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.702628  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.702658  333890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-379362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-379362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-379362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:05.839172  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:11:05.839203  333890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:05.839221  333890 ubuntu.go:190] setting up certificates
	I1213 09:11:05.839232  333890 provision.go:84] configureAuth start
	I1213 09:11:05.839277  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:05.857894  333890 provision.go:143] copyHostCerts
	I1213 09:11:05.857989  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:05.858008  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:05.858077  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:05.858209  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:05.858219  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:05.858255  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:05.858308  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:05.858315  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:05.858338  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:05.858384  333890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.embed-certs-379362 san=[127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube]
	I1213 09:11:05.995748  333890 provision.go:177] copyRemoteCerts
	I1213 09:11:05.995808  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:05.995841  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.014933  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.113890  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:06.131828  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:11:06.149744  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:11:06.167004  333890 provision.go:87] duration metric: took 327.760831ms to configureAuth
	I1213 09:11:06.167034  333890 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:06.167248  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:06.167371  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.186434  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:06.186700  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:06.186718  333890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:06.519456  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:06.519500  333890 machine.go:97] duration metric: took 4.159363834s to provisionDockerMachine
	I1213 09:11:06.519515  333890 start.go:293] postStartSetup for "embed-certs-379362" (driver="docker")
	I1213 09:11:06.519528  333890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:06.519593  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:06.519656  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.538380  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.634842  333890 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:06.638452  333890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:06.638473  333890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:06.638495  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:06.638554  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:06.638653  333890 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:06.638763  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:06.646671  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:06.664174  333890 start.go:296] duration metric: took 144.644973ms for postStartSetup
	I1213 09:11:06.664268  333890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:06.664305  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.683615  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.779502  333890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:06.785404  333890 fix.go:56] duration metric: took 4.736380482s for fixHost
	I1213 09:11:06.785434  333890 start.go:83] releasing machines lock for "embed-certs-379362", held for 4.736428362s
	I1213 09:11:06.785524  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:06.808003  333890 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:06.808061  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.808078  333890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:06.808172  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.833412  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.833605  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	W1213 09:11:02.373908  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:04.872547  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:06.873449  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:06.984735  333890 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:06.991583  333890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:07.026938  333890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:07.031772  333890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:07.031840  333890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:07.039992  333890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:07.040013  333890 start.go:496] detecting cgroup driver to use...
	I1213 09:11:07.040046  333890 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:07.040090  333890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:07.054785  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:07.068014  333890 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:07.068059  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:07.083003  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:07.096366  333890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:07.183847  333890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:07.269721  333890 docker.go:234] disabling docker service ...
	I1213 09:11:07.269771  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:07.285161  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:07.297389  333890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:07.384882  333890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:07.467142  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:07.481367  333890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:07.495794  333890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:07.495842  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.505016  333890 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:07.505072  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.514873  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.523864  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.532764  333890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:07.541036  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.549898  333890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.558670  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.568189  333890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:07.575855  333890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:07.582903  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:07.670568  333890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:11:07.843644  333890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:07.843715  333890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:07.848433  333890 start.go:564] Will wait 60s for crictl version
	I1213 09:11:07.848528  333890 ssh_runner.go:195] Run: which crictl
	I1213 09:11:07.852256  333890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:07.876837  333890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:07.876932  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.904955  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.933896  333890 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1213 09:11:04.083292  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	I1213 09:11:06.583127  328914 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:06.583165  328914 node_ready.go:38] duration metric: took 11.003480314s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:06.583181  328914 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:06.583231  328914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:06.594500  328914 api_server.go:72] duration metric: took 11.299110433s to wait for apiserver process to appear ...
	I1213 09:11:06.594525  328914 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:06.594541  328914 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:06.599417  328914 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1213 09:11:06.600336  328914 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:06.600358  328914 api_server.go:131] duration metric: took 5.826824ms to wait for apiserver health ...
	I1213 09:11:06.600365  328914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:06.603252  328914 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:06.603278  328914 system_pods.go:61] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.603283  328914 system_pods.go:61] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.603289  328914 system_pods.go:61] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.603292  328914 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.603296  328914 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.603302  328914 system_pods.go:61] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.603305  328914 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.603310  328914 system_pods.go:61] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.603316  328914 system_pods.go:74] duration metric: took 2.9457ms to wait for pod list to return data ...
	I1213 09:11:06.603325  328914 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:06.605317  328914 default_sa.go:45] found service account: "default"
	I1213 09:11:06.605334  328914 default_sa.go:55] duration metric: took 2.001953ms for default service account to be created ...
	I1213 09:11:06.605341  328914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:06.607611  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.607633  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.607645  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.607651  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.607654  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.607658  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.607662  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.607665  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.607669  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.607685  328914 retry.go:31] will retry after 272.651119ms: missing components: kube-dns
	I1213 09:11:06.885001  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.885038  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.885046  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.885055  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.885061  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.885067  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.885073  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.885078  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.885087  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.885109  328914 retry.go:31] will retry after 389.523569ms: missing components: kube-dns
	I1213 09:11:07.279258  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.279287  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.279293  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.279298  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.279302  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.279305  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.279308  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.279317  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.279322  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.279335  328914 retry.go:31] will retry after 448.006807ms: missing components: kube-dns
	I1213 09:11:07.732933  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.732978  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.732988  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.732997  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.733002  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.733008  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.733012  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.733016  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.733020  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.733031  328914 system_pods.go:126] duration metric: took 1.127684936s to wait for k8s-apps to be running ...
	I1213 09:11:07.733038  328914 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:07.733082  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:07.749643  328914 system_svc.go:56] duration metric: took 16.594824ms WaitForService to wait for kubelet
	I1213 09:11:07.749674  328914 kubeadm.go:587] duration metric: took 12.454300158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:07.749698  328914 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:07.752080  328914 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:07.752112  328914 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:07.752131  328914 node_conditions.go:105] duration metric: took 2.42792ms to run NodePressure ...
	I1213 09:11:07.752146  328914 start.go:242] waiting for startup goroutines ...
	I1213 09:11:07.752160  328914 start.go:247] waiting for cluster config update ...
	I1213 09:11:07.752173  328914 start.go:256] writing updated cluster config ...
	I1213 09:11:07.752508  328914 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:07.757523  328914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:07.761238  328914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.766432  328914 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:11:07.766458  328914 pod_ready.go:86] duration metric: took 5.192246ms for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.832062  328914 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.840179  328914 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.840203  328914 pod_ready.go:86] duration metric: took 8.11705ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.842550  328914 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.846547  328914 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.846570  328914 pod_ready.go:86] duration metric: took 3.999501ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.848547  328914 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.161326  328914 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:08.161349  328914 pod_ready.go:86] duration metric: took 312.780385ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.372943  324697 pod_ready.go:94] pod "coredns-5dd5756b68-g66tb" is "Ready"
	I1213 09:11:07.372967  324697 pod_ready.go:86] duration metric: took 39.505999616s for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.375663  324697 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.379892  324697 pod_ready.go:94] pod "etcd-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.379916  324697 pod_ready.go:86] duration metric: took 4.234738ms for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.382722  324697 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.386579  324697 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.386602  324697 pod_ready.go:86] duration metric: took 3.859665ms for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.388935  324697 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.570936  324697 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.570963  324697 pod_ready.go:86] duration metric: took 182.006223ms for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.772324  324697 pod_ready.go:83] waiting for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.173608  324697 pod_ready.go:94] pod "kube-proxy-6bkvj" is "Ready"
	I1213 09:11:08.173638  324697 pod_ready.go:86] duration metric: took 401.292694ms for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.372409  324697 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772063  324697 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-234538" is "Ready"
	I1213 09:11:08.772095  324697 pod_ready.go:86] duration metric: took 399.659792ms for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772110  324697 pod_ready.go:40] duration metric: took 40.909481149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:08.832194  324697 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 09:11:08.834797  324697 out.go:203] 
	W1213 09:11:08.836008  324697 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 09:11:08.837190  324697 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 09:11:08.838445  324697 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-234538" cluster and "default" namespace by default
	I1213 09:11:07.935243  333890 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:07.953455  333890 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:07.957554  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:07.968284  333890 kubeadm.go:884] updating cluster {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:07.968419  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:07.968476  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.002674  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.002700  333890 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:08.002756  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.028193  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.028216  333890 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:08.028225  333890 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 09:11:08.028332  333890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-379362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:11:08.028403  333890 ssh_runner.go:195] Run: crio config
	I1213 09:11:08.074930  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:08.074949  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:08.074961  333890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:08.074981  333890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-379362 NodeName:embed-certs-379362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:08.075100  333890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-379362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:11:08.075176  333890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:08.083542  333890 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:08.083624  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:08.091566  333890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 09:11:08.104461  333890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:08.117321  333890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 09:11:08.130224  333890 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:08.134005  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:08.144074  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:08.224481  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:08.245774  333890 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362 for IP: 192.168.85.2
	I1213 09:11:08.245792  333890 certs.go:195] generating shared ca certs ...
	I1213 09:11:08.245810  333890 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:08.245989  333890 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:08.246048  333890 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:08.246059  333890 certs.go:257] generating profile certs ...
	I1213 09:11:08.246147  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/client.key
	I1213 09:11:08.246205  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key.814e7b8a
	I1213 09:11:08.246246  333890 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key
	I1213 09:11:08.246349  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:08.246386  333890 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:08.246398  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:08.246422  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:08.246445  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:08.246474  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:08.246555  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:08.247224  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:08.265750  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:08.284698  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:08.304326  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:08.329185  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:11:08.348060  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:11:08.365610  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:08.383456  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:11:08.400955  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:08.418539  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:08.436393  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:08.454266  333890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:08.466744  333890 ssh_runner.go:195] Run: openssl version
	I1213 09:11:08.473100  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.480536  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:08.488383  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492189  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492239  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.529232  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:08.537596  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.545251  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:08.552715  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556579  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556629  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.600524  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:11:08.608451  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.616267  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:08.624437  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628633  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628687  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.663783  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:08.672093  333890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:08.676012  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:08.714649  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:08.753817  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:08.802703  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:08.851736  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:08.921259  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:11:08.977170  333890 kubeadm.go:401] StartCluster: {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:08.977291  333890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:08.977362  333890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:09.015784  333890 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:09.015811  333890 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:09.015818  333890 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:09.015825  333890 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:09.015829  333890 cri.go:89] found id: ""
	I1213 09:11:09.015875  333890 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:09.030638  333890 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:09.030704  333890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:09.039128  333890 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:09.039178  333890 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:09.039248  333890 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:09.047141  333890 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:09.048055  333890 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-379362" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.048563  333890 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-379362" cluster setting kubeconfig missing "embed-certs-379362" context setting]
	I1213 09:11:09.049221  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.050957  333890 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:09.059934  333890 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 09:11:09.059966  333890 kubeadm.go:602] duration metric: took 20.780797ms to restartPrimaryControlPlane
	I1213 09:11:09.059975  333890 kubeadm.go:403] duration metric: took 82.814517ms to StartCluster
	I1213 09:11:09.059992  333890 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.060056  333890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.062377  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.062685  333890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:09.062757  333890 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:09.062848  333890 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-379362"
	I1213 09:11:09.062864  333890 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-379362"
	W1213 09:11:09.062872  333890 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:09.062901  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062909  333890 addons.go:70] Setting dashboard=true in profile "embed-certs-379362"
	I1213 09:11:09.062926  333890 addons.go:239] Setting addon dashboard=true in "embed-certs-379362"
	W1213 09:11:09.062935  333890 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:09.062946  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:09.062959  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062995  333890 addons.go:70] Setting default-storageclass=true in profile "embed-certs-379362"
	I1213 09:11:09.063010  333890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-379362"
	I1213 09:11:09.063289  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063415  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063500  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.067611  333890 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:09.069241  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:09.089368  333890 addons.go:239] Setting addon default-storageclass=true in "embed-certs-379362"
	W1213 09:11:09.089396  333890 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:09.089421  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.089959  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.091596  333890 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:09.091621  333890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:09.094004  333890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.094022  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:09.094036  333890 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:08.362204  328914 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.762127  328914 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:11:08.762159  328914 pod_ready.go:86] duration metric: took 399.931988ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.963595  328914 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362581  328914 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:09.362857  328914 pod_ready.go:86] duration metric: took 399.227137ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362881  328914 pod_ready.go:40] duration metric: took 1.60532416s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:09.427945  328914 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:09.429725  328914 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	I1213 09:11:09.094083  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.094976  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:09.094990  333890 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:09.095048  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.122479  333890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.122516  333890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:09.122573  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.124934  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.126649  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.157673  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.240152  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:09.256813  333890 node_ready.go:35] waiting up to 6m0s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:09.266223  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:09.266249  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:09.266409  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.280359  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.282762  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:09.282784  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:09.306961  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:09.307019  333890 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:09.323015  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:09.323036  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:09.339143  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:09.339166  333890 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:09.367621  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:09.367646  333890 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:09.382705  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:09.382728  333890 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:09.398185  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:09.398219  333890 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:09.414356  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:09.414389  333890 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:09.430652  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:10.622141  333890 node_ready.go:49] node "embed-certs-379362" is "Ready"
	I1213 09:11:10.622177  333890 node_ready.go:38] duration metric: took 1.365330808s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:10.622194  333890 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:10.622248  333890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:11.141921  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875483061s)
	I1213 09:11:11.141933  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861538443s)
	I1213 09:11:11.142098  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.711411401s)
	I1213 09:11:11.142138  333890 api_server.go:72] duration metric: took 2.079421919s to wait for apiserver process to appear ...
	I1213 09:11:11.142151  333890 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:11.142170  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.143945  333890 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-379362 addons enable metrics-server
	
	I1213 09:11:11.149734  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.149761  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:11.155576  333890 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:11.156748  333890 addons.go:530] duration metric: took 2.094000513s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:11:11.642554  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.648040  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.648073  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:12.142953  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:12.147533  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 09:11:12.148602  333890 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:12.148630  333890 api_server.go:131] duration metric: took 1.006470603s to wait for apiserver health ...
	I1213 09:11:12.148643  333890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:12.152383  333890 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:12.152411  333890 system_pods.go:61] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.152418  333890 system_pods.go:61] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.152428  333890 system_pods.go:61] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.152449  333890 system_pods.go:61] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.152462  333890 system_pods.go:61] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.152469  333890 system_pods.go:61] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.152495  333890 system_pods.go:61] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.152526  333890 system_pods.go:61] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.152535  333890 system_pods.go:74] duration metric: took 3.881548ms to wait for pod list to return data ...
	I1213 09:11:12.152549  333890 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:12.155530  333890 default_sa.go:45] found service account: "default"
	I1213 09:11:12.155557  333890 default_sa.go:55] duration metric: took 3.001063ms for default service account to be created ...
	I1213 09:11:12.155568  333890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:12.158432  333890 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:12.158455  333890 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.158463  333890 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.158470  333890 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.158476  333890 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.158520  333890 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.158534  333890 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.158543  333890 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.158551  333890 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.158563  333890 system_pods.go:126] duration metric: took 2.988393ms to wait for k8s-apps to be running ...
	I1213 09:11:12.158571  333890 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:12.158615  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:12.172411  333890 system_svc.go:56] duration metric: took 13.834615ms WaitForService to wait for kubelet
	I1213 09:11:12.172438  333890 kubeadm.go:587] duration metric: took 3.109721475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:12.172457  333890 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:12.175344  333890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:12.175368  333890 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:12.175391  333890 node_conditions.go:105] duration metric: took 2.92165ms to run NodePressure ...
	I1213 09:11:12.175405  333890 start.go:242] waiting for startup goroutines ...
	I1213 09:11:12.175422  333890 start.go:247] waiting for cluster config update ...
	I1213 09:11:12.175436  333890 start.go:256] writing updated cluster config ...
	I1213 09:11:12.175704  333890 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:12.179850  333890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:12.183357  333890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:11:14.188818  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:16.189566  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:10:46 no-preload-291522 crio[564]: time="2025-12-13T09:10:46.921539627Z" level=info msg="Started container" PID=1747 containerID=4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper id=fbe77b26-d117-44a3-b200-0491b88341b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaf8f7bf85318983838f277e9d03cce753c63e422473ead6071b9938352043d0
	Dec 13 09:10:47 no-preload-291522 crio[564]: time="2025-12-13T09:10:47.315437881Z" level=info msg="Removing container: d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d" id=6a93e2b7-0c56-42fa-8c66-48e7e9f82efd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:47 no-preload-291522 crio[564]: time="2025-12-13T09:10:47.330007828Z" level=info msg="Removed container d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=6a93e2b7-0c56-42fa-8c66-48e7e9f82efd name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.339683907Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7447075e-6322-4d59-904e-2e5790cd56ad name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.340716488Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e116b2bd-a778-462d-a567-da1634ca4f70 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.341796328Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b7939fcc-f5ae-47c0-b794-e7d4a027e644 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.341942103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347192006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347401142Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/728d74451a0a6e9dcf33ecedd64eb3350ea0bca0ce55156685cdb035e35a62c4/merged/etc/passwd: no such file or directory"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.347435094Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/728d74451a0a6e9dcf33ecedd64eb3350ea0bca0ce55156685cdb035e35a62c4/merged/etc/group: no such file or directory"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.348087069Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.374062045Z" level=info msg="Created container 68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7: kube-system/storage-provisioner/storage-provisioner" id=b7939fcc-f5ae-47c0-b794-e7d4a027e644 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.374690965Z" level=info msg="Starting container: 68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7" id=0e28b774-6e31-438d-94d1-6f983ff5a4cc name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:56 no-preload-291522 crio[564]: time="2025-12-13T09:10:56.376580285Z" level=info msg="Started container" PID=1761 containerID=68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7 description=kube-system/storage-provisioner/storage-provisioner id=0e28b774-6e31-438d-94d1-6f983ff5a4cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f16d0a7d1efed4699c19dad3f92bcc3524716f994eb3a3577a23fd153c3bde3
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.214111988Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=59e8c1ce-5b99-4dc1-a35e-539137239075 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.215661135Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b778d5bc-b38a-4d2d-8e29-43ecbfae54e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.216759204Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=7aff1f2f-e9bf-4537-b237-9ed30a75e35e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.216905778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.22413067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.224600883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.256794914Z" level=info msg="Created container b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=7aff1f2f-e9bf-4537-b237-9ed30a75e35e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.257578102Z" level=info msg="Starting container: b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed" id=4ad96453-1845-4f24-b5a6-2896eb190ff2 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.259659041Z" level=info msg="Started container" PID=1797 containerID=b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper id=4ad96453-1845-4f24-b5a6-2896eb190ff2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaf8f7bf85318983838f277e9d03cce753c63e422473ead6071b9938352043d0
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.377964463Z" level=info msg="Removing container: 4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9" id=475c49f8-96a8-4d4e-8b41-f4b697163de3 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:10 no-preload-291522 crio[564]: time="2025-12-13T09:11:10.388778783Z" level=info msg="Removed container 4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn/dashboard-metrics-scraper" id=475c49f8-96a8-4d4e-8b41-f4b697163de3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b568d53cbfbef       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   eaf8f7bf85318       dashboard-metrics-scraper-867fb5f87b-nkzwn   kubernetes-dashboard
	68c6761f0f051       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   5f16d0a7d1efe       storage-provisioner                          kube-system
	a6598e1c508f2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago      Running             kubernetes-dashboard        0                   35fe07600471c       kubernetes-dashboard-b84665fb8-zg7qj         kubernetes-dashboard
	5bfd9401264d3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   a57609b7fa511       coredns-7d764666f9-r95cr                     kube-system
	b56e8734b820d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   d01a14a101cd1       busybox                                      default
	1db71287c16d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   5f16d0a7d1efe       storage-provisioner                          kube-system
	f2ff8aa0f7b65       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   8863556509a97       kindnet-sm6z6                                kube-system
	0cc2dea823087       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                           56 seconds ago      Running             kube-proxy                  0                   e212295a2e1ca       kube-proxy-ktgbz                             kube-system
	f8d1691eeb238       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                           59 seconds ago      Running             kube-apiserver              0                   621525de0c5ba       kube-apiserver-no-preload-291522             kube-system
	c1810afee5381       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                           59 seconds ago      Running             kube-controller-manager     0                   e03ab6d576b0a       kube-controller-manager-no-preload-291522    kube-system
	595a2d6f50c49       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           59 seconds ago      Running             etcd                        0                   48ad4cb4d9089       etcd-no-preload-291522                       kube-system
	c5352b1836776       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                           59 seconds ago      Running             kube-scheduler              0                   ede6d50750099       kube-scheduler-no-preload-291522             kube-system
	
	
	==> coredns [5bfd9401264d34d0b0d2eb0000b940d6ff3b6283b164ef83fed5d41fa173d160] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55537 - 37493 "HINFO IN 7127739216525855412.8379753089678689580. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01848884s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-291522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-291522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=no-preload-291522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-291522
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:11:15 +0000   Sat, 13 Dec 2025 09:09:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-291522
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                c2de4dd6-9253-460d-81e1-9ad6236c08d3
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-r95cr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-291522                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-sm6z6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-291522              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-291522     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-ktgbz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-291522              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-nkzwn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zg7qj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node no-preload-291522 event: Registered Node no-preload-291522 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-291522 event: Registered Node no-preload-291522 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [595a2d6f50c49e4264151a01e2cf2cd1d109e03af0557b967299dbbc387d9a26] <==
	{"level":"warn","ts":"2025-12-13T09:10:23.686901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.696746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.708271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.718264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.729842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.742410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.752042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.759165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.781893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.789320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.797941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.804841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:23.852955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-13T09:10:32.392270Z","caller":"traceutil/trace.go:172","msg":"trace[2069723077] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"112.562942ms","start":"2025-12-13T09:10:32.279685Z","end":"2025-12-13T09:10:32.392248Z","steps":["trace[2069723077] 'process raft request'  (duration: 112.419291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:10:32.782926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"295.475832ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766804244814047 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" mod_revision:576 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" value_size:1283 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:10:32.783043Z","caller":"traceutil/trace.go:172","msg":"trace[187860706] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"386.547288ms","start":"2025-12-13T09:10:32.396478Z","end":"2025-12-13T09:10:32.783025Z","steps":["trace[187860706] 'process raft request'  (duration: 90.459574ms)","trace[187860706] 'compare'  (duration: 295.367597ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:32.783157Z","caller":"traceutil/trace.go:172","msg":"trace[1466581601] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"386.636187ms","start":"2025-12-13T09:10:32.396508Z","end":"2025-12-13T09:10:32.783144Z","steps":["trace[1466581601] 'process raft request'  (duration: 386.525494ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:32.783119Z","caller":"traceutil/trace.go:172","msg":"trace[1711225685] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"160.325249ms","start":"2025-12-13T09:10:32.622770Z","end":"2025-12-13T09:10:32.783095Z","steps":["trace[1711225685] 'read index received'  (duration: 42.997562ms)","trace[1711225685] 'applied index is now lower than readState.Index'  (duration: 117.325705ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:10:32.783224Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.396465Z","time spent":"386.724352ms","remote":"127.0.0.1:54536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":961,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:573 > success:<request_put:<key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" value_size:883 >> failure:<request_range:<key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" > >"}
	{"level":"warn","ts":"2025-12-13T09:10:32.783127Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.396456Z","time spent":"386.628828ms","remote":"127.0.0.1:54762","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1363,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" mod_revision:576 > success:<request_put:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" value_size:1283 >> failure:<request_range:<key:\"/registry/endpointslices/kubernetes-dashboard/kubernetes-dashboard-nf4g4\" > >"}
	{"level":"info","ts":"2025-12-13T09:10:32.783264Z","caller":"traceutil/trace.go:172","msg":"trace[203550004] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"385.852997ms","start":"2025-12-13T09:10:32.397402Z","end":"2025-12-13T09:10:32.783255Z","steps":["trace[203550004] 'process raft request'  (duration: 385.69971ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-13T09:10:32.783332Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-13T09:10:32.397391Z","time spent":"385.895316ms","remote":"127.0.0.1:55192","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3195,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" mod_revision:564 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" value_size:3114 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" > >"}
	{"level":"warn","ts":"2025-12-13T09:10:32.783380Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.612113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-r95cr\" limit:1 ","response":"range_response_count:1 size:5933"}
	{"level":"info","ts":"2025-12-13T09:10:32.783408Z","caller":"traceutil/trace.go:172","msg":"trace[19255925] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-r95cr; range_end:; response_count:1; response_revision:589; }","duration":"160.64375ms","start":"2025-12-13T09:10:32.622755Z","end":"2025-12-13T09:10:32.783399Z","steps":["trace[19255925] 'agreement among raft nodes before linearized reading'  (duration: 160.456585ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.646411Z","caller":"traceutil/trace.go:172","msg":"trace[1708146057] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"176.825454ms","start":"2025-12-13T09:10:33.469561Z","end":"2025-12-13T09:10:33.646387Z","steps":["trace[1708146057] 'process raft request'  (duration: 176.701965ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:11:22 up 53 min,  0 user,  load average: 3.93, 3.52, 2.36
	Linux no-preload-291522 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2ff8aa0f7b65be2df303c832a34fdd6fb1cd31cf904c4955c07b3b3c73b8a8f] <==
	I1213 09:10:25.833200       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:25.833761       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:10:25.834042       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:25.834066       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:25.834096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:26.188241       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:26.188269       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:26.188282       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:26.188526       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:26.588475       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:26.588583       1 metrics.go:72] Registering metrics
	I1213 09:10:26.588689       1 controller.go:711] "Syncing nftables rules"
	I1213 09:10:36.190611       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:36.190657       1 main.go:301] handling current node
	I1213 09:10:46.188678       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:46.188722       1 main.go:301] handling current node
	I1213 09:10:56.188816       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:10:56.188857       1 main.go:301] handling current node
	I1213 09:11:06.189350       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:11:06.189403       1 main.go:301] handling current node
	I1213 09:11:16.188458       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1213 09:11:16.188521       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f8d1691eeb23819007271a5f04b8b81699f4e145a11d54fec11f89910cce3eda] <==
	I1213 09:10:24.566361       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:10:24.572610       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 09:10:24.572696       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 09:10:24.572850       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 09:10:24.573362       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 09:10:24.564796       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:10:24.580623       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:24.582659       1 policy_source.go:248] refreshing policies
	I1213 09:10:24.565273       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:10:24.593576       1 cache.go:39] Caches are synced for autoregister controller
	E1213 09:10:24.594653       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1213 09:10:24.607726       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:10:24.654455       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:25.115219       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:10:25.158811       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:10:25.191168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:25.201956       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:25.218477       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:10:25.292123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.245.80"}
	I1213 09:10:25.307250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.133.114"}
	I1213 09:10:25.369210       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:10:28.064741       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:10:28.309842       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:10:28.361665       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c1810afee538180d88c84228d084c1882c4e4161efd0b381dfe49512b1daff51] <==
	I1213 09:10:27.668641       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.669037       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.669147       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670510       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670810       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670872       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.670896       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671076       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671097       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671148       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671151       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671181       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671204       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671246       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.667899       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671308       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.668129       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.667902       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.671181       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.678232       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.679175       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:27.769006       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:27.769026       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:10:27.769033       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:10:27.779685       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [0cc2dea823087fae8ecd9ac5823e4c2ef2cd22c680ca6865c5debeb27a6c9b96] <==
	I1213 09:10:25.662309       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:10:25.724433       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:25.825243       1 shared_informer.go:377] "Caches are synced"
	I1213 09:10:25.825279       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:10:25.825396       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:10:25.848558       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:25.848645       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:10:25.854350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:10:25.854901       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:10:25.854987       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:25.857238       1 config.go:200] "Starting service config controller"
	I1213 09:10:25.857258       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:10:25.857285       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:10:25.857291       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:10:25.857376       1 config.go:309] "Starting node config controller"
	I1213 09:10:25.857392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:10:25.857399       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:10:25.857307       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:10:25.858223       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:10:25.958056       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:10:25.958053       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:10:25.958319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c5352b1836776c6b17ea7fec0581ec2ac4de137ad305bf9d95497fdf8f4fb634] <==
	I1213 09:10:22.975818       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:10:24.393746       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:10:24.393794       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:10:24.393808       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:10:24.393819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:10:24.480933       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 09:10:24.481055       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:24.503756       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:10:24.503886       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:10:24.523256       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:10:24.523331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:10:24.604634       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:10:40 no-preload-291522 kubelet[716]: E1213 09:10:40.616112     716 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-291522" containerName="kube-controller-manager"
	Dec 13 09:10:46 no-preload-291522 kubelet[716]: E1213 09:10:46.858228     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:46 no-preload-291522 kubelet[716]: I1213 09:10:46.858268     716 scope.go:122] "RemoveContainer" containerID="d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: I1213 09:10:47.314111     716 scope.go:122] "RemoveContainer" containerID="d81e38d0922b7684c243e6a5582749e029095a3c94c8830409ecc2fa97a65f9d"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: E1213 09:10:47.314299     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: I1213 09:10:47.314333     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:10:47 no-preload-291522 kubelet[716]: E1213 09:10:47.314544     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: I1213 09:10:56.339252     716 scope.go:122] "RemoveContainer" containerID="1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: E1213 09:10:56.858259     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: I1213 09:10:56.858305     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:10:56 no-preload-291522 kubelet[716]: E1213 09:10:56.858542     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:03 no-preload-291522 kubelet[716]: E1213 09:11:03.468844     716 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-r95cr" containerName="coredns"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.212361     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.212936     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.376614     716 scope.go:122] "RemoveContainer" containerID="4fad4fd0ffc0e0648a9a5aab233d764c71da60d3d1171c76e7502bb125041ad9"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.376824     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: I1213 09:11:10.376862     716 scope.go:122] "RemoveContainer" containerID="b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	Dec 13 09:11:10 no-preload-291522 kubelet[716]: E1213 09:11:10.377057     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: E1213 09:11:16.858605     716 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" containerName="dashboard-metrics-scraper"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: I1213 09:11:16.858694     716 scope.go:122] "RemoveContainer" containerID="b568d53cbfbef98fb966be78aa157c961bd12f67f98178b212effe0afc2082ed"
	Dec 13 09:11:16 no-preload-291522 kubelet[716]: E1213 09:11:16.858934     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-nkzwn_kubernetes-dashboard(4eb6b9cd-253c-45bc-8f7b-08ae4685b374)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-nkzwn" podUID="4eb6b9cd-253c-45bc-8f7b-08ae4685b374"
	Dec 13 09:11:17 no-preload-291522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:17 no-preload-291522 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:17 no-preload-291522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:17 no-preload-291522 systemd[1]: kubelet.service: Consumed 1.819s CPU time.
	
	
	==> kubernetes-dashboard [a6598e1c508f2de6e4501253701b39af6be5786452d68184b46c10c6045bdba5] <==
	2025/12/13 09:10:31 Starting overwatch
	2025/12/13 09:10:31 Using namespace: kubernetes-dashboard
	2025/12/13 09:10:31 Using in-cluster config to connect to apiserver
	2025/12/13 09:10:31 Using secret token for csrf signing
	2025/12/13 09:10:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:10:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:10:31 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/13 09:10:31 Generating JWE encryption key
	2025/12/13 09:10:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:10:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:10:32 Initializing JWE encryption key from synchronized object
	2025/12/13 09:10:32 Creating in-cluster Sidecar client
	2025/12/13 09:10:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:10:32 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1db71287c16d63009a5f2de744adf19e79144e6950e9eab448fbbb3f35ae0e18] <==
	I1213 09:10:25.625832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:10:55.628122       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [68c6761f0f0510b2748c8962041babfa051aafcfed48ea5944ca38de9dce19f7] <==
	I1213 09:10:56.388803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:10:56.397014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:10:56.397065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:10:56.399091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:10:59.854574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:04.114788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:07.712924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:10.766418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.789062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.794560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:13.794708       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:13.794772       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f21198f-5ecf-4114-b32b-88a1a9ef30f7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea became leader
	I1213 09:11:13.794899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea!
	W1213 09:11:13.796743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:13.799606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:13.895198       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-291522_c81c3ad3-9765-4b46-af33-9445c1408eea!
	W1213 09:11:15.803376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:15.807934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:17.812296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:17.817067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:19.820363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:19.825110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:21.829039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:21.835990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-291522 -n no-preload-291522
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-291522 -n no-preload-291522: exit status 2 (340.159215ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-291522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (295.79354ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-361270 describe deploy/metrics-server -n kube-system: exit status 1 (70.074873ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-361270 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-361270
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-361270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	        "Created": "2025-12-13T09:10:34.393520957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 329720,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:10:34.581691223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hostname",
	        "HostsPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hosts",
	        "LogPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122-json.log",
	        "Name": "/default-k8s-diff-port-361270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-361270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-361270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	                "LowerDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-361270",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-361270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-361270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f2135e13fa66df5d156f70a68e767a184dc079eeee891989aca97ee1e5a461c",
	            "SandboxKey": "/var/run/docker/netns/2f2135e13fa6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-361270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6564b4ae9b49d7064dcdd83dbabbc1dc234669ef37d48771177caa6ad8786081",
	                    "EndpointID": "8ac5ca1816c9f2d70f0a213396910bc13cd3f98097c22ba925b31081c199b7fa",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6a:da:aa:ee:af:3a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-361270",
	                        "33e3412677dd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
E1213 09:11:18.385890    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25: (1.365831383s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-833990 sudo containerd config dump                                                                                                                                                                                                  │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                    │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
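	The header above states the klog/glog line format used throughout this dump: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. As a minimal sketch only (the regular expression and field names below are illustrative assumptions, not minikube's own parsing code), lines of that shape can be split into fields like so:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// glogLine captures severity, date (mmdd), time, thread/process id,
	// source file:line, and the message of one klog-style entry.
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		// Sample taken verbatim from the log body below.
		sample := "I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ..."
		if m := glogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s id=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	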
	I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:01.859763  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.859768  333890 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:01.859780  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.860007  333890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:01.860461  333890 out.go:368] Setting JSON to false
	I1213 09:11:01.861836  333890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3214,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:01.861905  333890 start.go:143] virtualization: kvm guest
	I1213 09:11:01.863731  333890 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:01.865249  333890 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:01.865281  333890 notify.go:221] Checking for updates...
	I1213 09:11:01.867359  333890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:01.868519  333890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:01.869842  333890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:01.871012  333890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:01.872143  333890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:01.873683  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:01.874233  333890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:01.901548  333890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:01.901656  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:01.959403  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:01.949301411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:01.959565  333890 docker.go:319] overlay module found
	I1213 09:11:01.961826  333890 out.go:179] * Using the docker driver based on existing profile
	W1213 09:10:57.872528  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:59.873309  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:01.962862  333890 start.go:309] selected driver: docker
	I1213 09:11:01.962874  333890 start.go:927] validating driver "docker" against &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:01.962966  333890 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:01.963566  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:02.021259  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:02.010959916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:02.021565  333890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:02.021623  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:02.021676  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:02.021713  333890 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:02.023438  333890 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:11:02.024571  333890 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:02.025856  333890 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:02.026959  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:02.026992  333890 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:02.027007  333890 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:02.027033  333890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:02.027086  333890 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:02.027100  333890 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:02.027214  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.048858  333890 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:02.048877  333890 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:02.048892  333890 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:02.048922  333890 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:02.048994  333890 start.go:364] duration metric: took 42.67µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:11:02.049011  333890 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:02.049016  333890 fix.go:54] fixHost starting: 
	I1213 09:11:02.049233  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.068302  333890 fix.go:112] recreateIfNeeded on embed-certs-379362: state=Stopped err=<nil>
	W1213 09:11:02.068327  333890 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:10:59.583124  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.082475  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.629196  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:11:03.625367  323665 pod_ready.go:94] pod "coredns-7d764666f9-r95cr" is "Ready"
	I1213 09:11:03.625394  323665 pod_ready.go:86] duration metric: took 37.505010805s for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.628034  323665 pod_ready.go:83] waiting for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.631736  323665 pod_ready.go:94] pod "etcd-no-preload-291522" is "Ready"
	I1213 09:11:03.631760  323665 pod_ready.go:86] duration metric: took 3.705789ms for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.633687  323665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.637223  323665 pod_ready.go:94] pod "kube-apiserver-no-preload-291522" is "Ready"
	I1213 09:11:03.637246  323665 pod_ready.go:86] duration metric: took 3.541562ms for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.638918  323665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.823946  323665 pod_ready.go:94] pod "kube-controller-manager-no-preload-291522" is "Ready"
	I1213 09:11:03.823973  323665 pod_ready.go:86] duration metric: took 185.03756ms for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.024005  323665 pod_ready.go:83] waiting for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.424202  323665 pod_ready.go:94] pod "kube-proxy-ktgbz" is "Ready"
	I1213 09:11:04.424226  323665 pod_ready.go:86] duration metric: took 400.196554ms for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.624268  323665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023621  323665 pod_ready.go:94] pod "kube-scheduler-no-preload-291522" is "Ready"
	I1213 09:11:05.023647  323665 pod_ready.go:86] duration metric: took 399.354065ms for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023659  323665 pod_ready.go:40] duration metric: took 38.976009117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:05.066541  323665 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:05.068302  323665 out.go:179] * Done! kubectl is now configured to use "no-preload-291522" cluster and "default" namespace by default
	I1213 09:11:02.070162  333890 out.go:252] * Restarting existing docker container for "embed-certs-379362" ...
	I1213 09:11:02.070221  333890 cli_runner.go:164] Run: docker start embed-certs-379362
	I1213 09:11:02.321118  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.339633  333890 kic.go:430] container "embed-certs-379362" state is running.
	I1213 09:11:02.340097  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:02.359827  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.360100  333890 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:02.360192  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:02.380390  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:02.380635  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:02.380649  333890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:02.381372  333890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33123: read: connection reset by peer
	I1213 09:11:05.518562  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.518593  333890 ubuntu.go:182] provisioning hostname "embed-certs-379362"
	I1213 09:11:05.518644  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.537736  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.538011  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.538026  333890 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-379362 && echo "embed-certs-379362" | sudo tee /etc/hostname
	I1213 09:11:05.683114  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.683217  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.702249  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.702628  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.702658  333890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-379362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-379362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-379362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:05.839172  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:11:05.839203  333890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:05.839221  333890 ubuntu.go:190] setting up certificates
	I1213 09:11:05.839232  333890 provision.go:84] configureAuth start
	I1213 09:11:05.839277  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:05.857894  333890 provision.go:143] copyHostCerts
	I1213 09:11:05.857989  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:05.858008  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:05.858077  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:05.858209  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:05.858219  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:05.858255  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:05.858308  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:05.858315  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:05.858338  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:05.858384  333890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.embed-certs-379362 san=[127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube]
	I1213 09:11:05.995748  333890 provision.go:177] copyRemoteCerts
	I1213 09:11:05.995808  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:05.995841  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.014933  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.113890  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:06.131828  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:11:06.149744  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:11:06.167004  333890 provision.go:87] duration metric: took 327.760831ms to configureAuth
	I1213 09:11:06.167034  333890 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:06.167248  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:06.167371  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.186434  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:06.186700  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:06.186718  333890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:06.519456  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:06.519500  333890 machine.go:97] duration metric: took 4.159363834s to provisionDockerMachine
	I1213 09:11:06.519515  333890 start.go:293] postStartSetup for "embed-certs-379362" (driver="docker")
	I1213 09:11:06.519528  333890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:06.519593  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:06.519656  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.538380  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.634842  333890 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:06.638452  333890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:06.638473  333890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:06.638495  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:06.638554  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:06.638653  333890 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:06.638763  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:06.646671  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:06.664174  333890 start.go:296] duration metric: took 144.644973ms for postStartSetup
	I1213 09:11:06.664268  333890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:06.664305  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.683615  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.779502  333890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:06.785404  333890 fix.go:56] duration metric: took 4.736380482s for fixHost
	I1213 09:11:06.785434  333890 start.go:83] releasing machines lock for "embed-certs-379362", held for 4.736428362s
	I1213 09:11:06.785524  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:06.808003  333890 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:06.808061  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.808078  333890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:06.808172  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.833412  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.833605  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	W1213 09:11:02.373908  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:04.872547  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:06.873449  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:06.984735  333890 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:06.991583  333890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:07.026938  333890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:07.031772  333890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:07.031840  333890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:07.039992  333890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:07.040013  333890 start.go:496] detecting cgroup driver to use...
	I1213 09:11:07.040046  333890 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:07.040090  333890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:07.054785  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:07.068014  333890 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:07.068059  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:07.083003  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:07.096366  333890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:07.183847  333890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:07.269721  333890 docker.go:234] disabling docker service ...
	I1213 09:11:07.269771  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:07.285161  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:07.297389  333890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:07.384882  333890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:07.467142  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:07.481367  333890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:07.495794  333890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:07.495842  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.505016  333890 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:07.505072  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.514873  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.523864  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.532764  333890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:07.541036  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.549898  333890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.558670  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.568189  333890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:07.575855  333890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:07.582903  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:07.670568  333890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:11:07.843644  333890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:07.843715  333890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:07.848433  333890 start.go:564] Will wait 60s for crictl version
	I1213 09:11:07.848528  333890 ssh_runner.go:195] Run: which crictl
	I1213 09:11:07.852256  333890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:07.876837  333890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:07.876932  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.904955  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.933896  333890 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1213 09:11:04.083292  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	I1213 09:11:06.583127  328914 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:06.583165  328914 node_ready.go:38] duration metric: took 11.003480314s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:06.583181  328914 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:06.583231  328914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:06.594500  328914 api_server.go:72] duration metric: took 11.299110433s to wait for apiserver process to appear ...
	I1213 09:11:06.594525  328914 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:06.594541  328914 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:06.599417  328914 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1213 09:11:06.600336  328914 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:06.600358  328914 api_server.go:131] duration metric: took 5.826824ms to wait for apiserver health ...
	I1213 09:11:06.600365  328914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:06.603252  328914 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:06.603278  328914 system_pods.go:61] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.603283  328914 system_pods.go:61] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.603289  328914 system_pods.go:61] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.603292  328914 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.603296  328914 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.603302  328914 system_pods.go:61] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.603305  328914 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.603310  328914 system_pods.go:61] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.603316  328914 system_pods.go:74] duration metric: took 2.9457ms to wait for pod list to return data ...
	I1213 09:11:06.603325  328914 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:06.605317  328914 default_sa.go:45] found service account: "default"
	I1213 09:11:06.605334  328914 default_sa.go:55] duration metric: took 2.001953ms for default service account to be created ...
	I1213 09:11:06.605341  328914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:06.607611  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.607633  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.607645  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.607651  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.607654  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.607658  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.607662  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.607665  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.607669  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.607685  328914 retry.go:31] will retry after 272.651119ms: missing components: kube-dns
	I1213 09:11:06.885001  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.885038  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.885046  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.885055  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.885061  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.885067  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.885073  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.885078  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.885087  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.885109  328914 retry.go:31] will retry after 389.523569ms: missing components: kube-dns
	I1213 09:11:07.279258  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.279287  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.279293  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.279298  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.279302  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.279305  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.279308  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.279317  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.279322  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.279335  328914 retry.go:31] will retry after 448.006807ms: missing components: kube-dns
	I1213 09:11:07.732933  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.732978  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.732988  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.732997  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.733002  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.733008  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.733012  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.733016  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.733020  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.733031  328914 system_pods.go:126] duration metric: took 1.127684936s to wait for k8s-apps to be running ...
	I1213 09:11:07.733038  328914 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:07.733082  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:07.749643  328914 system_svc.go:56] duration metric: took 16.594824ms WaitForService to wait for kubelet
	I1213 09:11:07.749674  328914 kubeadm.go:587] duration metric: took 12.454300158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:07.749698  328914 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:07.752080  328914 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:07.752112  328914 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:07.752131  328914 node_conditions.go:105] duration metric: took 2.42792ms to run NodePressure ...
	I1213 09:11:07.752146  328914 start.go:242] waiting for startup goroutines ...
	I1213 09:11:07.752160  328914 start.go:247] waiting for cluster config update ...
	I1213 09:11:07.752173  328914 start.go:256] writing updated cluster config ...
	I1213 09:11:07.752508  328914 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:07.757523  328914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:07.761238  328914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.766432  328914 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:11:07.766458  328914 pod_ready.go:86] duration metric: took 5.192246ms for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.832062  328914 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.840179  328914 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.840203  328914 pod_ready.go:86] duration metric: took 8.11705ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.842550  328914 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.846547  328914 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.846570  328914 pod_ready.go:86] duration metric: took 3.999501ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.848547  328914 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.161326  328914 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:08.161349  328914 pod_ready.go:86] duration metric: took 312.780385ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.372943  324697 pod_ready.go:94] pod "coredns-5dd5756b68-g66tb" is "Ready"
	I1213 09:11:07.372967  324697 pod_ready.go:86] duration metric: took 39.505999616s for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.375663  324697 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.379892  324697 pod_ready.go:94] pod "etcd-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.379916  324697 pod_ready.go:86] duration metric: took 4.234738ms for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.382722  324697 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.386579  324697 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.386602  324697 pod_ready.go:86] duration metric: took 3.859665ms for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.388935  324697 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.570936  324697 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.570963  324697 pod_ready.go:86] duration metric: took 182.006223ms for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.772324  324697 pod_ready.go:83] waiting for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.173608  324697 pod_ready.go:94] pod "kube-proxy-6bkvj" is "Ready"
	I1213 09:11:08.173638  324697 pod_ready.go:86] duration metric: took 401.292694ms for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.372409  324697 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772063  324697 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-234538" is "Ready"
	I1213 09:11:08.772095  324697 pod_ready.go:86] duration metric: took 399.659792ms for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772110  324697 pod_ready.go:40] duration metric: took 40.909481149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:08.832194  324697 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 09:11:08.834797  324697 out.go:203] 
	W1213 09:11:08.836008  324697 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 09:11:08.837190  324697 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 09:11:08.838445  324697 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-234538" cluster and "default" namespace by default
	I1213 09:11:07.935243  333890 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:07.953455  333890 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:07.957554  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
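Note: the bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the current gateway address, going through a temp file so grep can read the file before it is overwritten. A rough Go sketch of the same idea, assuming the process can write /etc/hosts directly (the test itself does this over SSH with sudo):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostname = "host.minikube.internal"
	const entry = "192.168.85.1\t" + hostname

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	// Keep every line that is not an existing host.minikube.internal entry.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// The shell version stages the result in /tmp/h.$$ because it streams the file;
	// here the whole content is already in memory, so we can write it back directly.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}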
	I1213 09:11:07.968284  333890 kubeadm.go:884] updating cluster {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:07.968419  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:07.968476  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.002674  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.002700  333890 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:08.002756  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.028193  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.028216  333890 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:08.028225  333890 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 09:11:08.028332  333890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-379362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:11:08.028403  333890 ssh_runner.go:195] Run: crio config
	I1213 09:11:08.074930  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:08.074949  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:08.074961  333890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:08.074981  333890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-379362 NodeName:embed-certs-379362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:08.075100  333890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-379362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
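	Note: the rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch for sanity-checking such a stream before it is copied to /var/tmp/minikube/kubeadm.yaml.new, assuming gopkg.in/yaml.v3 is available; this is illustrative only, not the code minikube runs:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the rendered config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err) // a malformed document in the stream
		}
		// Prints InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration
		fmt.Println("kind:", doc["kind"])
	}
}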
	
	I1213 09:11:08.075176  333890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:08.083542  333890 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:08.083624  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:08.091566  333890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 09:11:08.104461  333890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:08.117321  333890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 09:11:08.130224  333890 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:08.134005  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:08.144074  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:08.224481  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:08.245774  333890 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362 for IP: 192.168.85.2
	I1213 09:11:08.245792  333890 certs.go:195] generating shared ca certs ...
	I1213 09:11:08.245810  333890 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:08.245989  333890 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:08.246048  333890 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:08.246059  333890 certs.go:257] generating profile certs ...
	I1213 09:11:08.246147  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/client.key
	I1213 09:11:08.246205  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key.814e7b8a
	I1213 09:11:08.246246  333890 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key
	I1213 09:11:08.246349  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:08.246386  333890 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:08.246398  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:08.246422  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:08.246445  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:08.246474  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:08.246555  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:08.247224  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:08.265750  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:08.284698  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:08.304326  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:08.329185  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:11:08.348060  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:11:08.365610  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:08.383456  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:11:08.400955  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:08.418539  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:08.436393  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:08.454266  333890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:08.466744  333890 ssh_runner.go:195] Run: openssl version
	I1213 09:11:08.473100  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.480536  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:08.488383  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492189  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492239  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.529232  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:08.537596  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.545251  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:08.552715  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556579  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556629  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.600524  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:11:08.608451  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.616267  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:08.624437  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628633  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628687  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.663783  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:08.672093  333890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:08.676012  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:08.714649  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:08.753817  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:08.802703  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:08.851736  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:08.921259  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
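Note: each openssl x509 -checkend 86400 call above asks whether a certificate remains valid for at least another 24 hours (exit status 0 if so). A standard-library Go sketch of the equivalent check; the certificate path is taken from the log and purely illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same question as `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
}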
	I1213 09:11:08.977170  333890 kubeadm.go:401] StartCluster: {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:08.977291  333890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:08.977362  333890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:09.015784  333890 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:09.015811  333890 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:09.015818  333890 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:09.015825  333890 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:09.015829  333890 cri.go:89] found id: ""
	I1213 09:11:09.015875  333890 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:09.030638  333890 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
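	Note: the warning above comes from running sudo runc list -f json and treating its non-zero exit as non-fatal (there are simply no paused runc containers to resume, so the restart path continues). A hedged sketch of that pattern with os/exec, not minikube's ssh_runner:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: log it and carry on, as the restart path above does.
		fmt.Printf("runc list failed (status %d): %s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the command could not be started at all
	}
	fmt.Printf("paused containers: %s\n", out)
}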
	I1213 09:11:09.030704  333890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:09.039128  333890 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:09.039178  333890 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:09.039248  333890 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:09.047141  333890 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:09.048055  333890 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-379362" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.048563  333890 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-379362" cluster setting kubeconfig missing "embed-certs-379362" context setting]
	I1213 09:11:09.049221  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.050957  333890 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:09.059934  333890 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 09:11:09.059966  333890 kubeadm.go:602] duration metric: took 20.780797ms to restartPrimaryControlPlane
	I1213 09:11:09.059975  333890 kubeadm.go:403] duration metric: took 82.814517ms to StartCluster
	I1213 09:11:09.059992  333890 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.060056  333890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.062377  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.062685  333890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:09.062757  333890 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:09.062848  333890 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-379362"
	I1213 09:11:09.062864  333890 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-379362"
	W1213 09:11:09.062872  333890 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:09.062901  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062909  333890 addons.go:70] Setting dashboard=true in profile "embed-certs-379362"
	I1213 09:11:09.062926  333890 addons.go:239] Setting addon dashboard=true in "embed-certs-379362"
	W1213 09:11:09.062935  333890 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:09.062946  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:09.062959  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062995  333890 addons.go:70] Setting default-storageclass=true in profile "embed-certs-379362"
	I1213 09:11:09.063010  333890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-379362"
	I1213 09:11:09.063289  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063415  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063500  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.067611  333890 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:09.069241  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:09.089368  333890 addons.go:239] Setting addon default-storageclass=true in "embed-certs-379362"
	W1213 09:11:09.089396  333890 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:09.089421  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.089959  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.091596  333890 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:09.091621  333890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:09.094004  333890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.094022  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:09.094036  333890 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:08.362204  328914 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.762127  328914 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:11:08.762159  328914 pod_ready.go:86] duration metric: took 399.931988ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.963595  328914 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362581  328914 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:09.362857  328914 pod_ready.go:86] duration metric: took 399.227137ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362881  328914 pod_ready.go:40] duration metric: took 1.60532416s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:09.427945  328914 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:09.429725  328914 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	I1213 09:11:09.094083  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.094976  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:09.094990  333890 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:09.095048  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.122479  333890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.122516  333890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:09.122573  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.124934  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.126649  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.157673  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.240152  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:09.256813  333890 node_ready.go:35] waiting up to 6m0s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:09.266223  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:09.266249  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:09.266409  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.280359  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.282762  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:09.282784  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:09.306961  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:09.307019  333890 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:09.323015  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:09.323036  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:09.339143  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:09.339166  333890 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:09.367621  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:09.367646  333890 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:09.382705  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:09.382728  333890 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:09.398185  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:09.398219  333890 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:09.414356  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:09.414389  333890 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:09.430652  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:10.622141  333890 node_ready.go:49] node "embed-certs-379362" is "Ready"
	I1213 09:11:10.622177  333890 node_ready.go:38] duration metric: took 1.365330808s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:10.622194  333890 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:10.622248  333890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:11.141921  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875483061s)
	I1213 09:11:11.141933  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861538443s)
	I1213 09:11:11.142098  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.711411401s)
	I1213 09:11:11.142138  333890 api_server.go:72] duration metric: took 2.079421919s to wait for apiserver process to appear ...
	I1213 09:11:11.142151  333890 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:11.142170  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.143945  333890 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-379362 addons enable metrics-server
	
	I1213 09:11:11.149734  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.149761  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:11.155576  333890 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:11.156748  333890 addons.go:530] duration metric: took 2.094000513s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:11:11.642554  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.648040  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.648073  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:12.142953  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:12.147533  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 09:11:12.148602  333890 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:12.148630  333890 api_server.go:131] duration metric: took 1.006470603s to wait for apiserver health ...
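	Note: the probe above polls https://192.168.85.2:8443/healthz until it returns 200, logging the per-component breakdown while the status is still 500. A minimal Go sketch of such a probe; it skips TLS verification for brevity, whereas minikube's own check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Quick probe only; production code should verify against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}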
	I1213 09:11:12.148643  333890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:12.152383  333890 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:12.152411  333890 system_pods.go:61] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.152418  333890 system_pods.go:61] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.152428  333890 system_pods.go:61] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.152449  333890 system_pods.go:61] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.152462  333890 system_pods.go:61] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.152469  333890 system_pods.go:61] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.152495  333890 system_pods.go:61] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.152526  333890 system_pods.go:61] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.152535  333890 system_pods.go:74] duration metric: took 3.881548ms to wait for pod list to return data ...
	I1213 09:11:12.152549  333890 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:12.155530  333890 default_sa.go:45] found service account: "default"
	I1213 09:11:12.155557  333890 default_sa.go:55] duration metric: took 3.001063ms for default service account to be created ...
	I1213 09:11:12.155568  333890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:12.158432  333890 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:12.158455  333890 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.158463  333890 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.158470  333890 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.158476  333890 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.158520  333890 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.158534  333890 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.158543  333890 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.158551  333890 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.158563  333890 system_pods.go:126] duration metric: took 2.988393ms to wait for k8s-apps to be running ...
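	Note: the k8s-apps check above lists kube-system pods and reports per-container readiness. A comparable standalone check with client-go (a sketch; the kubeconfig path is illustrative and this is not the verifier minikube itself runs):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}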
	I1213 09:11:12.158571  333890 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:12.158615  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:12.172411  333890 system_svc.go:56] duration metric: took 13.834615ms WaitForService to wait for kubelet
	I1213 09:11:12.172438  333890 kubeadm.go:587] duration metric: took 3.109721475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:12.172457  333890 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:12.175344  333890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:12.175368  333890 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:12.175391  333890 node_conditions.go:105] duration metric: took 2.92165ms to run NodePressure ...
	I1213 09:11:12.175405  333890 start.go:242] waiting for startup goroutines ...
	I1213 09:11:12.175422  333890 start.go:247] waiting for cluster config update ...
	I1213 09:11:12.175436  333890 start.go:256] writing updated cluster config ...
	I1213 09:11:12.175704  333890 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:12.179850  333890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:12.183357  333890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:11:14.188818  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:16.189566  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:11:06 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:06.833525507Z" level=info msg="Starting container: ec961112c19b55ad085ecff7af88a8d7c02cf82b5337f463a0ce6d0fd675bb0e" id=659f1448-40a0-4460-857f-c1dfd85a745f name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:06 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:06.835888762Z" level=info msg="Started container" PID=1853 containerID=ec961112c19b55ad085ecff7af88a8d7c02cf82b5337f463a0ce6d0fd675bb0e description=kube-system/coredns-66bc5c9577-xhjmn/coredns id=659f1448-40a0-4460-857f-c1dfd85a745f name=/runtime.v1.RuntimeService/StartContainer sandboxID=775fda0ab536228910dee6f33dc65694826912ba1d9b18832e5327009a106b40
	Dec 13 09:11:09 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:09.997934117Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5f2cd489-8391-4724-a1c5-0a48bb1ce4bd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:09 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:09.998024453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.003729555Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a5018c91e6836863f2122880a4f7b1092682f11eb9377323537424481f137e01 UID:5d3bc9d7-ad91-4181-95e2-346452464325 NetNS:/var/run/netns/79d7cd77-ebf2-4728-9508-cb2ed22a46b3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009963b0}] Aliases:map[]}"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.003767019Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.015452937Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a5018c91e6836863f2122880a4f7b1092682f11eb9377323537424481f137e01 UID:5d3bc9d7-ad91-4181-95e2-346452464325 NetNS:/var/run/netns/79d7cd77-ebf2-4728-9508-cb2ed22a46b3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009963b0}] Aliases:map[]}"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.0156229Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.016342867Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.017255971Z" level=info msg="Ran pod sandbox a5018c91e6836863f2122880a4f7b1092682f11eb9377323537424481f137e01 with infra container: default/busybox/POD" id=5f2cd489-8391-4724-a1c5-0a48bb1ce4bd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.018533572Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56400724-537c-4d2d-b3fb-170afda2a7f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.018676807Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=56400724-537c-4d2d-b3fb-170afda2a7f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.018726386Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=56400724-537c-4d2d-b3fb-170afda2a7f8 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.01954962Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2672fa57-6740-4ea9-8563-c667a7d2f5d7 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:11:10 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:10.021345575Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.298810533Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2672fa57-6740-4ea9-8563-c667a7d2f5d7 name=/runtime.v1.ImageService/PullImage
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.299582059Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=823f9ad1-c7c6-4b87-9bef-93d01564e89f name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.300930929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=64d6357a-6802-4940-ba6b-637cbab40da0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.304609985Z" level=info msg="Creating container: default/busybox/busybox" id=0011d388-4954-4a8f-b19a-fdf071fd18a2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.304727229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.308564062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.309061176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.337674008Z" level=info msg="Created container 4bcbe6f31d599cd505ce05a1951e3a690207406abecfcacd810a60a717e11802: default/busybox/busybox" id=0011d388-4954-4a8f-b19a-fdf071fd18a2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.338369902Z" level=info msg="Starting container: 4bcbe6f31d599cd505ce05a1951e3a690207406abecfcacd810a60a717e11802" id=56c646e9-7a25-477d-99d2-30c4a837d5c1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:11 default-k8s-diff-port-361270 crio[773]: time="2025-12-13T09:11:11.340552071Z" level=info msg="Started container" PID=1929 containerID=4bcbe6f31d599cd505ce05a1951e3a690207406abecfcacd810a60a717e11802 description=default/busybox/busybox id=56c646e9-7a25-477d-99d2-30c4a837d5c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5018c91e6836863f2122880a4f7b1092682f11eb9377323537424481f137e01
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4bcbe6f31d599       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   a5018c91e6836       busybox                                                default
	ec961112c19b5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   775fda0ab5362       coredns-66bc5c9577-xhjmn                               kube-system
	34ac1a5f55c15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   39ea4478d28f9       storage-provisioner                                    kube-system
	3ea43f576d131       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                      23 seconds ago      Running             kube-proxy                0                   8900e010a95bb       kube-proxy-78nr2                                       kube-system
	3fd4df2cf10ba       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   d57ce127c8423       kindnet-g6h8g                                          kube-system
	20540c05cc1e4       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                      34 seconds ago      Running             kube-scheduler            0                   e510ef6f5c4c1       kube-scheduler-default-k8s-diff-port-361270            kube-system
	e0133aeb7295e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                      34 seconds ago      Running             kube-apiserver            0                   02ae204b2403e       kube-apiserver-default-k8s-diff-port-361270            kube-system
	6257908d38358       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                      34 seconds ago      Running             kube-controller-manager   0                   41c67925e5efd       kube-controller-manager-default-k8s-diff-port-361270   kube-system
	431054554e695       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   1971d1cb784fb       etcd-default-k8s-diff-port-361270                      kube-system
	
	
	==> coredns [ec961112c19b55ad085ecff7af88a8d7c02cf82b5337f463a0ce6d0fd675bb0e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59159 - 63470 "HINFO IN 14962510611973450.6310739968989588231. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.014968215s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-361270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-361270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=default-k8s-diff-port-361270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-361270
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:06 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:06 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:06 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:11:06 +0000   Sat, 13 Dec 2025 09:11:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-361270
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2dd9bf5a-9012-41ec-b7a7-58f5e5034374
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-xhjmn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-361270                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-g6h8g                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-361270             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-361270    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-78nr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-361270             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-361270 event: Registered Node default-k8s-diff-port-361270 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-361270 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [431054554e6950d6ed75a0896ec5f7e9da4ed37ac82fc9e5336d8c092ab7428a] <==
	{"level":"warn","ts":"2025-12-13T09:10:46.447571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.455824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.463304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.469618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.476386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.483868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.491188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.498345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.504847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.513646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.519799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.525973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.532297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.538686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.546913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.553366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.559647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.567276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.573647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.579945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.586294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.612742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.619708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.627336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:10:46.681924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42058","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:19 up 53 min,  0 user,  load average: 3.93, 3.52, 2.36
	Linux default-k8s-diff-port-361270 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3fd4df2cf10baa6a044cf6769ef51697fab1b2aca4b54bd0770d0d933d3cc578] <==
	I1213 09:10:55.790415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:55.790761       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 09:10:55.790945       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:55.790972       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:55.790996       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:56.086423       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:56.086447       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:56.086470       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:56.087479       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:56.386829       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:56.386876       1 metrics.go:72] Registering metrics
	I1213 09:10:56.386943       1 controller.go:711] "Syncing nftables rules"
	I1213 09:11:06.089663       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:11:06.089721       1 main.go:301] handling current node
	I1213 09:11:16.089578       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:11:16.089630       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e0133aeb7295e7364c8e643b553475685d945e14cda063d8b4d400c6a75e4ba8] <==
	I1213 09:10:47.178046       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1213 09:10:47.179082       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:10:47.183457       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1213 09:10:47.184069       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:47.189693       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:47.189935       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:10:47.353751       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:48.061302       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1213 09:10:48.064733       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:10:48.064752       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:10:48.511242       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:48.545418       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:48.666188       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:10:48.672115       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1213 09:10:48.673126       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:10:48.676942       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:10:49.100870       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:10:49.824197       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:10:49.832743       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:10:49.840298       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:10:54.754899       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:54.758582       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:10:54.904521       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:10:55.152508       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 09:10:55.152519       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6257908d38358e7c1b00b25981bc33b2f102a1a9dbf6b64f819eea383eaeb2c7] <==
	I1213 09:10:54.079408       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:10:54.099310       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:10:54.099354       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1213 09:10:54.100094       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1213 09:10:54.100117       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:10:54.100131       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1213 09:10:54.100282       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:10:54.100373       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1213 09:10:54.100463       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:10:54.100604       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1213 09:10:54.100763       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1213 09:10:54.100809       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:10:54.100841       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:10:54.101050       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1213 09:10:54.101067       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 09:10:54.101085       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:10:54.101181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 09:10:54.101294       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-361270"
	I1213 09:10:54.101351       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1213 09:10:54.101472       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:10:54.104705       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:10:54.105593       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:10:54.110398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:10:54.120874       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:11:09.104003       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3ea43f576d13175625d84a343dc2525f9401eb226f41ebefc79634284f1c4c9e] <==
	I1213 09:10:55.590373       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:10:55.659703       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:10:55.760011       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:10:55.760054       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 09:10:55.760169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:10:55.781438       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:55.781527       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:10:55.787848       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:10:55.788382       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:10:55.789574       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:55.791007       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:10:55.791031       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:10:55.791089       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:10:55.791094       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:10:55.791013       1 config.go:200] "Starting service config controller"
	I1213 09:10:55.791112       1 config.go:309] "Starting node config controller"
	I1213 09:10:55.791122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:10:55.791128       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:10:55.791111       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:10:55.891140       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:10:55.891156       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:10:55.891199       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [20540c05cc1e4b3140d446543dc098c583a20233935d9d0a1e1cbe8e944d18db] <==
	E1213 09:10:47.108067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:10:47.108075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1213 09:10:47.108579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1213 09:10:47.109246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1213 09:10:47.109297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1213 09:10:47.109691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1213 09:10:47.109686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1213 09:10:47.109740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1213 09:10:47.109830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:10:47.109891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1213 09:10:47.109948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1213 09:10:47.109985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:10:47.109992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1213 09:10:47.110059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:10:47.110072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:10:47.975352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1213 09:10:47.980407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1213 09:10:47.999551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1213 09:10:48.039109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1213 09:10:48.122844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1213 09:10:48.204918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1213 09:10:48.277302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1213 09:10:48.297374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1213 09:10:48.350125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1213 09:10:51.205439       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:10:50 default-k8s-diff-port-361270 kubelet[1321]: E1213 09:10:50.688168    1321 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-361270\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-361270"
	Dec 13 09:10:50 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:50.712161    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-361270" podStartSLOduration=1.712139262 podStartE2EDuration="1.712139262s" podCreationTimestamp="2025-12-13 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:50.702920342 +0000 UTC m=+1.125353755" watchObservedRunningTime="2025-12-13 09:10:50.712139262 +0000 UTC m=+1.134572676"
	Dec 13 09:10:50 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:50.722413    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-361270" podStartSLOduration=1.722390683 podStartE2EDuration="1.722390683s" podCreationTimestamp="2025-12-13 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:50.722380226 +0000 UTC m=+1.144813635" watchObservedRunningTime="2025-12-13 09:10:50.722390683 +0000 UTC m=+1.144824095"
	Dec 13 09:10:50 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:50.722556    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-361270" podStartSLOduration=1.722546881 podStartE2EDuration="1.722546881s" podCreationTimestamp="2025-12-13 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:50.712910276 +0000 UTC m=+1.135343688" watchObservedRunningTime="2025-12-13 09:10:50.722546881 +0000 UTC m=+1.144980295"
	Dec 13 09:10:50 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:50.731170    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-361270" podStartSLOduration=1.731150162 podStartE2EDuration="1.731150162s" podCreationTimestamp="2025-12-13 09:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:50.731030204 +0000 UTC m=+1.153463615" watchObservedRunningTime="2025-12-13 09:10:50.731150162 +0000 UTC m=+1.153583577"
	Dec 13 09:10:54 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:54.079439    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 13 09:10:54 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:54.080307    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188198    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/337a66f0-19f1-4351-bebf-7872d75ebf3e-lib-modules\") pod \"kube-proxy-78nr2\" (UID: \"337a66f0-19f1-4351-bebf-7872d75ebf3e\") " pod="kube-system/kube-proxy-78nr2"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188262    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f054428-09d6-450c-befd-066cedc40ad4-xtables-lock\") pod \"kindnet-g6h8g\" (UID: \"7f054428-09d6-450c-befd-066cedc40ad4\") " pod="kube-system/kindnet-g6h8g"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188316    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f054428-09d6-450c-befd-066cedc40ad4-lib-modules\") pod \"kindnet-g6h8g\" (UID: \"7f054428-09d6-450c-befd-066cedc40ad4\") " pod="kube-system/kindnet-g6h8g"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188353    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkvtq\" (UniqueName: \"kubernetes.io/projected/7f054428-09d6-450c-befd-066cedc40ad4-kube-api-access-pkvtq\") pod \"kindnet-g6h8g\" (UID: \"7f054428-09d6-450c-befd-066cedc40ad4\") " pod="kube-system/kindnet-g6h8g"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188382    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7f054428-09d6-450c-befd-066cedc40ad4-cni-cfg\") pod \"kindnet-g6h8g\" (UID: \"7f054428-09d6-450c-befd-066cedc40ad4\") " pod="kube-system/kindnet-g6h8g"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188407    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/337a66f0-19f1-4351-bebf-7872d75ebf3e-kube-proxy\") pod \"kube-proxy-78nr2\" (UID: \"337a66f0-19f1-4351-bebf-7872d75ebf3e\") " pod="kube-system/kube-proxy-78nr2"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188437    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/337a66f0-19f1-4351-bebf-7872d75ebf3e-xtables-lock\") pod \"kube-proxy-78nr2\" (UID: \"337a66f0-19f1-4351-bebf-7872d75ebf3e\") " pod="kube-system/kube-proxy-78nr2"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.188459    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24725\" (UniqueName: \"kubernetes.io/projected/337a66f0-19f1-4351-bebf-7872d75ebf3e-kube-api-access-24725\") pod \"kube-proxy-78nr2\" (UID: \"337a66f0-19f1-4351-bebf-7872d75ebf3e\") " pod="kube-system/kube-proxy-78nr2"
	Dec 13 09:10:55 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:55.701058    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g6h8g" podStartSLOduration=0.701035037 podStartE2EDuration="701.035037ms" podCreationTimestamp="2025-12-13 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:55.700786431 +0000 UTC m=+6.123219844" watchObservedRunningTime="2025-12-13 09:10:55.701035037 +0000 UTC m=+6.123468449"
	Dec 13 09:10:57 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:10:57.486026    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-78nr2" podStartSLOduration=2.486003087 podStartE2EDuration="2.486003087s" podCreationTimestamp="2025-12-13 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:10:55.711465782 +0000 UTC m=+6.133899193" watchObservedRunningTime="2025-12-13 09:10:57.486003087 +0000 UTC m=+7.908436499"
	Dec 13 09:11:06 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:06.441774    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 13 09:11:06 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:06.578151    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr6lh\" (UniqueName: \"kubernetes.io/projected/9268f47f-d59e-482a-82e1-d77d41735195-kube-api-access-xr6lh\") pod \"storage-provisioner\" (UID: \"9268f47f-d59e-482a-82e1-d77d41735195\") " pod="kube-system/storage-provisioner"
	Dec 13 09:11:06 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:06.578202    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkrd\" (UniqueName: \"kubernetes.io/projected/3906f322-2c03-4d49-a6db-af27bf718d3d-kube-api-access-hbkrd\") pod \"coredns-66bc5c9577-xhjmn\" (UID: \"3906f322-2c03-4d49-a6db-af27bf718d3d\") " pod="kube-system/coredns-66bc5c9577-xhjmn"
	Dec 13 09:11:06 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:06.578293    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9268f47f-d59e-482a-82e1-d77d41735195-tmp\") pod \"storage-provisioner\" (UID: \"9268f47f-d59e-482a-82e1-d77d41735195\") " pod="kube-system/storage-provisioner"
	Dec 13 09:11:06 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:06.578354    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3906f322-2c03-4d49-a6db-af27bf718d3d-config-volume\") pod \"coredns-66bc5c9577-xhjmn\" (UID: \"3906f322-2c03-4d49-a6db-af27bf718d3d\") " pod="kube-system/coredns-66bc5c9577-xhjmn"
	Dec 13 09:11:07 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:07.731289    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xhjmn" podStartSLOduration=12.731265508 podStartE2EDuration="12.731265508s" podCreationTimestamp="2025-12-13 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:07.731082198 +0000 UTC m=+18.153515610" watchObservedRunningTime="2025-12-13 09:11:07.731265508 +0000 UTC m=+18.153698920"
	Dec 13 09:11:07 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:07.744254    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.744233044 podStartE2EDuration="12.744233044s" podCreationTimestamp="2025-12-13 09:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:07.743076685 +0000 UTC m=+18.165510099" watchObservedRunningTime="2025-12-13 09:11:07.744233044 +0000 UTC m=+18.166666456"
	Dec 13 09:11:09 default-k8s-diff-port-361270 kubelet[1321]: I1213 09:11:09.798872    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdz7n\" (UniqueName: \"kubernetes.io/projected/5d3bc9d7-ad91-4181-95e2-346452464325-kube-api-access-wdz7n\") pod \"busybox\" (UID: \"5d3bc9d7-ad91-4181-95e2-346452464325\") " pod="default/busybox"
	
	
	==> storage-provisioner [34ac1a5f55c154fa156ca83d1c013b2c7cf83ebfd8bd5ca7efdad76f151b56b1] <==
	I1213 09:11:06.839237       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:11:06.849893       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:11:06.849967       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:11:06.852753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:06.857539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:06.857768       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:06.857895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"008c8b69-db9b-496b-ba4e-78cdc6236358", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-361270_3f3e9824-ee38-4bec-b9c9-2efcc95a262e became leader
	I1213 09:11:06.857970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_3f3e9824-ee38-4bec-b9c9-2efcc95a262e!
	W1213 09:11:06.860703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:06.864763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:06.958348       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_3f3e9824-ee38-4bec-b9c9-2efcc95a262e!
	W1213 09:11:08.870656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:08.881338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:10.884834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:10.888923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:12.891795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:12.895187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:14.898023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:14.902454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:16.906989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:16.913396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:18.916817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:18.926682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-234538 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-234538 --alsologtostderr -v=1: exit status 80 (1.980696389s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-234538 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:11:20.725315  338219 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:20.725447  338219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:20.725455  338219 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:20.725463  338219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:20.726245  338219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:20.726696  338219 out.go:368] Setting JSON to false
	I1213 09:11:20.726720  338219 mustload.go:66] Loading cluster: old-k8s-version-234538
	I1213 09:11:20.727200  338219 config.go:182] Loaded profile config "old-k8s-version-234538": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1213 09:11:20.727733  338219 cli_runner.go:164] Run: docker container inspect old-k8s-version-234538 --format={{.State.Status}}
	I1213 09:11:20.760112  338219 host.go:66] Checking if "old-k8s-version-234538" exists ...
	I1213 09:11:20.760716  338219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:20.834521  338219 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-13 09:11:20.822435765 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:20.835438  338219 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-234538 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:11:20.837577  338219 out.go:179] * Pausing node old-k8s-version-234538 ... 
	I1213 09:11:20.838691  338219 host.go:66] Checking if "old-k8s-version-234538" exists ...
	I1213 09:11:20.838980  338219 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:20.839023  338219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-234538
	I1213 09:11:20.859769  338219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/old-k8s-version-234538/id_rsa Username:docker}
	I1213 09:11:20.953738  338219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:20.980542  338219 pause.go:52] kubelet running: true
	I1213 09:11:20.980598  338219 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:21.169777  338219 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:21.169904  338219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:21.247738  338219 cri.go:89] found id: "df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80"
	I1213 09:11:21.247759  338219 cri.go:89] found id: "08a51937a4354c4bd30265f581946e42c57640c60d744db906265da68f2b4db2"
	I1213 09:11:21.247765  338219 cri.go:89] found id: "736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05"
	I1213 09:11:21.247770  338219 cri.go:89] found id: "c1e919ad40225b12a51ff83b8b1ab06eb950ed45acc023817130b2bbb115503a"
	I1213 09:11:21.247774  338219 cri.go:89] found id: "6d4755f502135ef94a11ec27217a6459bc85937c42dc06e9a1e638df610779fb"
	I1213 09:11:21.247780  338219 cri.go:89] found id: "0cc4f5e85cb5d4e6d07eeb129540177624ba0b7b05e38e98203ef68cb53670db"
	I1213 09:11:21.247785  338219 cri.go:89] found id: "ccfc11a0ddb8317d8e1609f9778d0755dc87dac089178550d5aa53b7a0853424"
	I1213 09:11:21.247789  338219 cri.go:89] found id: "e2292eb60503a271d5b03a7e7a8cf528dea0e07edd89ce5c55a81bf4b0c2b310"
	I1213 09:11:21.247794  338219 cri.go:89] found id: "b6d10fbd863a8a81004a0b20dba55d8b74f364e15f804329d14979332876f75a"
	I1213 09:11:21.247802  338219 cri.go:89] found id: "a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	I1213 09:11:21.247806  338219 cri.go:89] found id: "f28fd60687b20846a254002fb7cb4119ba9d01d31a94e6c8eec0afa665e5faa0"
	I1213 09:11:21.247811  338219 cri.go:89] found id: ""
	I1213 09:11:21.247853  338219 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:21.260834  338219 retry.go:31] will retry after 297.713676ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:21Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:21.559417  338219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:21.573601  338219 pause.go:52] kubelet running: false
	I1213 09:11:21.573683  338219 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:21.739577  338219 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:21.739653  338219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:21.816720  338219 cri.go:89] found id: "df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80"
	I1213 09:11:21.816771  338219 cri.go:89] found id: "08a51937a4354c4bd30265f581946e42c57640c60d744db906265da68f2b4db2"
	I1213 09:11:21.816780  338219 cri.go:89] found id: "736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05"
	I1213 09:11:21.816787  338219 cri.go:89] found id: "c1e919ad40225b12a51ff83b8b1ab06eb950ed45acc023817130b2bbb115503a"
	I1213 09:11:21.816793  338219 cri.go:89] found id: "6d4755f502135ef94a11ec27217a6459bc85937c42dc06e9a1e638df610779fb"
	I1213 09:11:21.816803  338219 cri.go:89] found id: "0cc4f5e85cb5d4e6d07eeb129540177624ba0b7b05e38e98203ef68cb53670db"
	I1213 09:11:21.816813  338219 cri.go:89] found id: "ccfc11a0ddb8317d8e1609f9778d0755dc87dac089178550d5aa53b7a0853424"
	I1213 09:11:21.816822  338219 cri.go:89] found id: "e2292eb60503a271d5b03a7e7a8cf528dea0e07edd89ce5c55a81bf4b0c2b310"
	I1213 09:11:21.816831  338219 cri.go:89] found id: "b6d10fbd863a8a81004a0b20dba55d8b74f364e15f804329d14979332876f75a"
	I1213 09:11:21.816870  338219 cri.go:89] found id: "a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	I1213 09:11:21.816881  338219 cri.go:89] found id: "f28fd60687b20846a254002fb7cb4119ba9d01d31a94e6c8eec0afa665e5faa0"
	I1213 09:11:21.816887  338219 cri.go:89] found id: ""
	I1213 09:11:21.816934  338219 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:21.832167  338219 retry.go:31] will retry after 531.771512ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:21Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:22.364677  338219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:22.378150  338219 pause.go:52] kubelet running: false
	I1213 09:11:22.378209  338219 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:22.541574  338219 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:22.541651  338219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:22.610382  338219 cri.go:89] found id: "df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80"
	I1213 09:11:22.610404  338219 cri.go:89] found id: "08a51937a4354c4bd30265f581946e42c57640c60d744db906265da68f2b4db2"
	I1213 09:11:22.610410  338219 cri.go:89] found id: "736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05"
	I1213 09:11:22.610414  338219 cri.go:89] found id: "c1e919ad40225b12a51ff83b8b1ab06eb950ed45acc023817130b2bbb115503a"
	I1213 09:11:22.610419  338219 cri.go:89] found id: "6d4755f502135ef94a11ec27217a6459bc85937c42dc06e9a1e638df610779fb"
	I1213 09:11:22.610423  338219 cri.go:89] found id: "0cc4f5e85cb5d4e6d07eeb129540177624ba0b7b05e38e98203ef68cb53670db"
	I1213 09:11:22.610427  338219 cri.go:89] found id: "ccfc11a0ddb8317d8e1609f9778d0755dc87dac089178550d5aa53b7a0853424"
	I1213 09:11:22.610431  338219 cri.go:89] found id: "e2292eb60503a271d5b03a7e7a8cf528dea0e07edd89ce5c55a81bf4b0c2b310"
	I1213 09:11:22.610435  338219 cri.go:89] found id: "b6d10fbd863a8a81004a0b20dba55d8b74f364e15f804329d14979332876f75a"
	I1213 09:11:22.610450  338219 cri.go:89] found id: "a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	I1213 09:11:22.610455  338219 cri.go:89] found id: "f28fd60687b20846a254002fb7cb4119ba9d01d31a94e6c8eec0afa665e5faa0"
	I1213 09:11:22.610459  338219 cri.go:89] found id: ""
	I1213 09:11:22.610515  338219 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:22.626622  338219 out.go:203] 
	W1213 09:11:22.627903  338219 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:11:22.627917  338219 out.go:285] * 
	* 
	W1213 09:11:22.632725  338219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:11:22.634285  338219 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-234538 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-234538
helpers_test.go:244: (dbg) docker inspect old-k8s-version-234538:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	        "Created": "2025-12-13T09:09:04.827842959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:10:17.20091342Z",
	            "FinishedAt": "2025-12-13T09:10:16.209321608Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hosts",
	        "LogPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e-json.log",
	        "Name": "/old-k8s-version-234538",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-234538:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-234538",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	                "LowerDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-234538",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-234538/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-234538",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "01bb6e6fa13e1364cd3b276174b95fdc3ffc32c99eb3ae3bb02ee98b4ef570c4",
	            "SandboxKey": "/var/run/docker/netns/01bb6e6fa13e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-234538": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03cd5b8c21bed175419d3254147f83d57d2a9fa170523cc5fcd50bb748af5603",
	                    "EndpointID": "0b69255072d972c007882ec64d0bb2fbdf6285823076eea000d7681f1a1ec0be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fe:88:91:21:4d:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-234538",
	                        "9956457b660b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538: exit status 2 (353.813488ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25: (1.127155929s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-833990 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                    │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                               │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:01.859763  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.859768  333890 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:01.859780  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.860007  333890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:01.860461  333890 out.go:368] Setting JSON to false
	I1213 09:11:01.861836  333890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3214,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:01.861905  333890 start.go:143] virtualization: kvm guest
	I1213 09:11:01.863731  333890 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:01.865249  333890 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:01.865281  333890 notify.go:221] Checking for updates...
	I1213 09:11:01.867359  333890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:01.868519  333890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:01.869842  333890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:01.871012  333890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:01.872143  333890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:01.873683  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:01.874233  333890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:01.901548  333890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:01.901656  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:01.959403  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:01.949301411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:01.959565  333890 docker.go:319] overlay module found
	I1213 09:11:01.961826  333890 out.go:179] * Using the docker driver based on existing profile
	W1213 09:10:57.872528  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:59.873309  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:01.962862  333890 start.go:309] selected driver: docker
	I1213 09:11:01.962874  333890 start.go:927] validating driver "docker" against &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:01.962966  333890 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:01.963566  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:02.021259  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:02.010959916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:02.021565  333890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:02.021623  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:02.021676  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:02.021713  333890 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:02.023438  333890 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:11:02.024571  333890 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:02.025856  333890 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:02.026959  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:02.026992  333890 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:02.027007  333890 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:02.027033  333890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:02.027086  333890 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:02.027100  333890 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:02.027214  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.048858  333890 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:02.048877  333890 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:02.048892  333890 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:02.048922  333890 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:02.048994  333890 start.go:364] duration metric: took 42.67µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:11:02.049011  333890 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:02.049016  333890 fix.go:54] fixHost starting: 
	I1213 09:11:02.049233  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.068302  333890 fix.go:112] recreateIfNeeded on embed-certs-379362: state=Stopped err=<nil>
	W1213 09:11:02.068327  333890 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:10:59.583124  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.082475  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.629196  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:11:03.625367  323665 pod_ready.go:94] pod "coredns-7d764666f9-r95cr" is "Ready"
	I1213 09:11:03.625394  323665 pod_ready.go:86] duration metric: took 37.505010805s for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.628034  323665 pod_ready.go:83] waiting for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.631736  323665 pod_ready.go:94] pod "etcd-no-preload-291522" is "Ready"
	I1213 09:11:03.631760  323665 pod_ready.go:86] duration metric: took 3.705789ms for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.633687  323665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.637223  323665 pod_ready.go:94] pod "kube-apiserver-no-preload-291522" is "Ready"
	I1213 09:11:03.637246  323665 pod_ready.go:86] duration metric: took 3.541562ms for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.638918  323665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.823946  323665 pod_ready.go:94] pod "kube-controller-manager-no-preload-291522" is "Ready"
	I1213 09:11:03.823973  323665 pod_ready.go:86] duration metric: took 185.03756ms for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.024005  323665 pod_ready.go:83] waiting for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.424202  323665 pod_ready.go:94] pod "kube-proxy-ktgbz" is "Ready"
	I1213 09:11:04.424226  323665 pod_ready.go:86] duration metric: took 400.196554ms for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.624268  323665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023621  323665 pod_ready.go:94] pod "kube-scheduler-no-preload-291522" is "Ready"
	I1213 09:11:05.023647  323665 pod_ready.go:86] duration metric: took 399.354065ms for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023659  323665 pod_ready.go:40] duration metric: took 38.976009117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:05.066541  323665 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:05.068302  323665 out.go:179] * Done! kubectl is now configured to use "no-preload-291522" cluster and "default" namespace by default
	I1213 09:11:02.070162  333890 out.go:252] * Restarting existing docker container for "embed-certs-379362" ...
	I1213 09:11:02.070221  333890 cli_runner.go:164] Run: docker start embed-certs-379362
	I1213 09:11:02.321118  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.339633  333890 kic.go:430] container "embed-certs-379362" state is running.
	I1213 09:11:02.340097  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:02.359827  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.360100  333890 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:02.360192  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:02.380390  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:02.380635  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:02.380649  333890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:02.381372  333890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33123: read: connection reset by peer
	I1213 09:11:05.518562  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.518593  333890 ubuntu.go:182] provisioning hostname "embed-certs-379362"
	I1213 09:11:05.518644  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.537736  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.538011  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.538026  333890 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-379362 && echo "embed-certs-379362" | sudo tee /etc/hostname
	I1213 09:11:05.683114  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.683217  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.702249  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.702628  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.702658  333890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-379362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-379362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-379362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:05.839172  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:11:05.839203  333890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:05.839221  333890 ubuntu.go:190] setting up certificates
	I1213 09:11:05.839232  333890 provision.go:84] configureAuth start
	I1213 09:11:05.839277  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:05.857894  333890 provision.go:143] copyHostCerts
	I1213 09:11:05.857989  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:05.858008  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:05.858077  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:05.858209  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:05.858219  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:05.858255  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:05.858308  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:05.858315  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:05.858338  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:05.858384  333890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.embed-certs-379362 san=[127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube]
	I1213 09:11:05.995748  333890 provision.go:177] copyRemoteCerts
	I1213 09:11:05.995808  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:05.995841  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.014933  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.113890  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:06.131828  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:11:06.149744  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:11:06.167004  333890 provision.go:87] duration metric: took 327.760831ms to configureAuth
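	The configureAuth step above regenerates the machine's server certificate with SANs [127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube] and copies it to /etc/docker/server.pem on the node. As a hedged sketch (not part of the recorded run, and assuming openssl is present in the embed-certs-379362 container), the SAN list can be double-checked from the host:
	
	  # Inspect the SANs of the server cert that was just provisioned onto the node.
	  docker exec embed-certs-379362 \
	    openssl x509 -in /etc/docker/server.pem -noout -text | \
	    grep -A1 'Subject Alternative Name'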
	I1213 09:11:06.167034  333890 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:06.167248  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:06.167371  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.186434  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:06.186700  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:06.186718  333890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:06.519456  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:06.519500  333890 machine.go:97] duration metric: took 4.159363834s to provisionDockerMachine
	I1213 09:11:06.519515  333890 start.go:293] postStartSetup for "embed-certs-379362" (driver="docker")
	I1213 09:11:06.519528  333890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:06.519593  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:06.519656  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.538380  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.634842  333890 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:06.638452  333890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:06.638473  333890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:06.638495  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:06.638554  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:06.638653  333890 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:06.638763  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:06.646671  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:06.664174  333890 start.go:296] duration metric: took 144.644973ms for postStartSetup
	I1213 09:11:06.664268  333890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:06.664305  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.683615  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.779502  333890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:06.785404  333890 fix.go:56] duration metric: took 4.736380482s for fixHost
	I1213 09:11:06.785434  333890 start.go:83] releasing machines lock for "embed-certs-379362", held for 4.736428362s
	I1213 09:11:06.785524  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:06.808003  333890 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:06.808061  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.808078  333890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:06.808172  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.833412  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.833605  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	W1213 09:11:02.373908  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:04.872547  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:06.873449  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:06.984735  333890 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:06.991583  333890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:07.026938  333890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:07.031772  333890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:07.031840  333890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:07.039992  333890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:07.040013  333890 start.go:496] detecting cgroup driver to use...
	I1213 09:11:07.040046  333890 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:07.040090  333890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:07.054785  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:07.068014  333890 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:07.068059  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:07.083003  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:07.096366  333890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:07.183847  333890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:07.269721  333890 docker.go:234] disabling docker service ...
	I1213 09:11:07.269771  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:07.285161  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:07.297389  333890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:07.384882  333890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:07.467142  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:07.481367  333890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:07.495794  333890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:07.495842  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.505016  333890 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:07.505072  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.514873  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.523864  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.532764  333890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:07.541036  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.549898  333890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.558670  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.568189  333890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:07.575855  333890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:07.582903  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:07.670568  333890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:11:07.843644  333890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:07.843715  333890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:07.848433  333890 start.go:564] Will wait 60s for crictl version
	I1213 09:11:07.848528  333890 ssh_runner.go:195] Run: which crictl
	I1213 09:11:07.852256  333890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:07.876837  333890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:07.876932  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.904955  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.933896  333890 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
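	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl, then restart crio. A minimal sketch for verifying those keys on the node (an assumed follow-up check, not a command this run executes):
	
	  # Expected after the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	  # cgroup_manager = "systemd", conmon_cgroup = "pod",
	  # and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls.
	  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf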
	W1213 09:11:04.083292  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	I1213 09:11:06.583127  328914 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:06.583165  328914 node_ready.go:38] duration metric: took 11.003480314s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:06.583181  328914 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:06.583231  328914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:06.594500  328914 api_server.go:72] duration metric: took 11.299110433s to wait for apiserver process to appear ...
	I1213 09:11:06.594525  328914 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:06.594541  328914 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:06.599417  328914 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1213 09:11:06.600336  328914 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:06.600358  328914 api_server.go:131] duration metric: took 5.826824ms to wait for apiserver health ...
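	The healthz wait above polls https://192.168.103.2:8444/healthz until it returns 200. Outside the harness, the same probe could be reproduced by hand; a sketch, assuming the apiserver is reachable from the host and accepting that its certificate is signed by minikubeCA (hence -k):
	
	  # Reproduce the readiness probe from the log; expected body is "ok".
	  curl -sk https://192.168.103.2:8444/healthz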
	I1213 09:11:06.600365  328914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:06.603252  328914 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:06.603278  328914 system_pods.go:61] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.603283  328914 system_pods.go:61] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.603289  328914 system_pods.go:61] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.603292  328914 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.603296  328914 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.603302  328914 system_pods.go:61] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.603305  328914 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.603310  328914 system_pods.go:61] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.603316  328914 system_pods.go:74] duration metric: took 2.9457ms to wait for pod list to return data ...
	I1213 09:11:06.603325  328914 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:06.605317  328914 default_sa.go:45] found service account: "default"
	I1213 09:11:06.605334  328914 default_sa.go:55] duration metric: took 2.001953ms for default service account to be created ...
	I1213 09:11:06.605341  328914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:06.607611  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.607633  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.607645  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.607651  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.607654  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.607658  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.607662  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.607665  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.607669  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.607685  328914 retry.go:31] will retry after 272.651119ms: missing components: kube-dns
	I1213 09:11:06.885001  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.885038  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.885046  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.885055  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.885061  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.885067  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.885073  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.885078  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.885087  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.885109  328914 retry.go:31] will retry after 389.523569ms: missing components: kube-dns
	I1213 09:11:07.279258  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.279287  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.279293  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.279298  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.279302  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.279305  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.279308  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.279317  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.279322  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.279335  328914 retry.go:31] will retry after 448.006807ms: missing components: kube-dns
	I1213 09:11:07.732933  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.732978  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.732988  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.732997  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.733002  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.733008  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.733012  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.733016  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.733020  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.733031  328914 system_pods.go:126] duration metric: took 1.127684936s to wait for k8s-apps to be running ...
	I1213 09:11:07.733038  328914 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:07.733082  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:07.749643  328914 system_svc.go:56] duration metric: took 16.594824ms WaitForService to wait for kubelet
	I1213 09:11:07.749674  328914 kubeadm.go:587] duration metric: took 12.454300158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:07.749698  328914 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:07.752080  328914 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:07.752112  328914 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:07.752131  328914 node_conditions.go:105] duration metric: took 2.42792ms to run NodePressure ...
	I1213 09:11:07.752146  328914 start.go:242] waiting for startup goroutines ...
	I1213 09:11:07.752160  328914 start.go:247] waiting for cluster config update ...
	I1213 09:11:07.752173  328914 start.go:256] writing updated cluster config ...
	I1213 09:11:07.752508  328914 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:07.757523  328914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:07.761238  328914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.766432  328914 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:11:07.766458  328914 pod_ready.go:86] duration metric: took 5.192246ms for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.832062  328914 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.840179  328914 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.840203  328914 pod_ready.go:86] duration metric: took 8.11705ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.842550  328914 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.846547  328914 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.846570  328914 pod_ready.go:86] duration metric: took 3.999501ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.848547  328914 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.161326  328914 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:08.161349  328914 pod_ready.go:86] duration metric: took 312.780385ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.372943  324697 pod_ready.go:94] pod "coredns-5dd5756b68-g66tb" is "Ready"
	I1213 09:11:07.372967  324697 pod_ready.go:86] duration metric: took 39.505999616s for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.375663  324697 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.379892  324697 pod_ready.go:94] pod "etcd-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.379916  324697 pod_ready.go:86] duration metric: took 4.234738ms for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.382722  324697 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.386579  324697 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.386602  324697 pod_ready.go:86] duration metric: took 3.859665ms for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.388935  324697 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.570936  324697 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.570963  324697 pod_ready.go:86] duration metric: took 182.006223ms for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.772324  324697 pod_ready.go:83] waiting for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.173608  324697 pod_ready.go:94] pod "kube-proxy-6bkvj" is "Ready"
	I1213 09:11:08.173638  324697 pod_ready.go:86] duration metric: took 401.292694ms for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.372409  324697 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772063  324697 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-234538" is "Ready"
	I1213 09:11:08.772095  324697 pod_ready.go:86] duration metric: took 399.659792ms for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772110  324697 pod_ready.go:40] duration metric: took 40.909481149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:08.832194  324697 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 09:11:08.834797  324697 out.go:203] 
	W1213 09:11:08.836008  324697 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 09:11:08.837190  324697 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 09:11:08.838445  324697 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-234538" cluster and "default" namespace by default
	I1213 09:11:07.935243  333890 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:07.953455  333890 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:07.957554  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:07.968284  333890 kubeadm.go:884] updating cluster {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:07.968419  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:07.968476  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.002674  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.002700  333890 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:08.002756  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.028193  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.028216  333890 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:08.028225  333890 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 09:11:08.028332  333890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-379362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:11:08.028403  333890 ssh_runner.go:195] Run: crio config
	I1213 09:11:08.074930  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:08.074949  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:08.074961  333890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:08.074981  333890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-379362 NodeName:embed-certs-379362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:08.075100  333890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-379362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:11:08.075176  333890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:08.083542  333890 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:08.083624  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:08.091566  333890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 09:11:08.104461  333890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:08.117321  333890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
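	The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (2214 bytes). A hedged sketch for sanity-checking that file with the bundled kubeadm binary, assuming the "kubeadm config validate" subcommand is available in the v1.34 release (it is not something this run executes):
	
	  # Validate the generated config against the kubeadm API types it declares.
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new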
	I1213 09:11:08.130224  333890 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:08.134005  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:08.144074  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:08.224481  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:08.245774  333890 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362 for IP: 192.168.85.2
	I1213 09:11:08.245792  333890 certs.go:195] generating shared ca certs ...
	I1213 09:11:08.245810  333890 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:08.245989  333890 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:08.246048  333890 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:08.246059  333890 certs.go:257] generating profile certs ...
	I1213 09:11:08.246147  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/client.key
	I1213 09:11:08.246205  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key.814e7b8a
	I1213 09:11:08.246246  333890 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key
	I1213 09:11:08.246349  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:08.246386  333890 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:08.246398  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:08.246422  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:08.246445  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:08.246474  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:08.246555  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:08.247224  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:08.265750  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:08.284698  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:08.304326  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:08.329185  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:11:08.348060  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:11:08.365610  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:08.383456  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:11:08.400955  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:08.418539  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:08.436393  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:08.454266  333890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:08.466744  333890 ssh_runner.go:195] Run: openssl version
	I1213 09:11:08.473100  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.480536  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:08.488383  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492189  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492239  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.529232  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:08.537596  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.545251  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:08.552715  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556579  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556629  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.600524  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:11:08.608451  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.616267  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:08.624437  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628633  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628687  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.663783  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:08.672093  333890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:08.676012  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:08.714649  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:08.753817  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:08.802703  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:08.851736  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:08.921259  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
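	Each openssl invocation above runs x509 -checkend 86400 against a single control-plane certificate, i.e. it asks whether that cert expires within the next 24 hours. The same test can be applied to every cert under /var/lib/minikube/certs in one pass; a sketch using only the flags already seen in the log, offered as an assumed manual check rather than part of the run:
	
	  # Flag any certificate under /var/lib/minikube/certs that expires within 24h.
	  sudo find /var/lib/minikube/certs -name '*.crt' | while read -r crt; do
	      sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null \
	        || echo "expiring soon: $crt"
	  done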
	I1213 09:11:08.977170  333890 kubeadm.go:401] StartCluster: {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:08.977291  333890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:08.977362  333890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:09.015784  333890 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:09.015811  333890 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:09.015818  333890 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:09.015825  333890 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:09.015829  333890 cri.go:89] found id: ""
	I1213 09:11:09.015875  333890 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:09.030638  333890 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:09.030704  333890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:09.039128  333890 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:09.039178  333890 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:09.039248  333890 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:09.047141  333890 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:09.048055  333890 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-379362" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.048563  333890 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-379362" cluster setting kubeconfig missing "embed-certs-379362" context setting]
	I1213 09:11:09.049221  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.050957  333890 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:09.059934  333890 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 09:11:09.059966  333890 kubeadm.go:602] duration metric: took 20.780797ms to restartPrimaryControlPlane
	I1213 09:11:09.059975  333890 kubeadm.go:403] duration metric: took 82.814517ms to StartCluster
	I1213 09:11:09.059992  333890 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.060056  333890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.062377  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.062685  333890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:09.062757  333890 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:09.062848  333890 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-379362"
	I1213 09:11:09.062864  333890 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-379362"
	W1213 09:11:09.062872  333890 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:09.062901  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062909  333890 addons.go:70] Setting dashboard=true in profile "embed-certs-379362"
	I1213 09:11:09.062926  333890 addons.go:239] Setting addon dashboard=true in "embed-certs-379362"
	W1213 09:11:09.062935  333890 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:09.062946  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:09.062959  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062995  333890 addons.go:70] Setting default-storageclass=true in profile "embed-certs-379362"
	I1213 09:11:09.063010  333890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-379362"
	I1213 09:11:09.063289  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063415  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063500  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.067611  333890 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:09.069241  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:09.089368  333890 addons.go:239] Setting addon default-storageclass=true in "embed-certs-379362"
	W1213 09:11:09.089396  333890 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:09.089421  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.089959  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.091596  333890 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:09.091621  333890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:09.094004  333890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.094022  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:09.094036  333890 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:08.362204  328914 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.762127  328914 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:11:08.762159  328914 pod_ready.go:86] duration metric: took 399.931988ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.963595  328914 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362581  328914 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:09.362857  328914 pod_ready.go:86] duration metric: took 399.227137ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362881  328914 pod_ready.go:40] duration metric: took 1.60532416s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:09.427945  328914 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:09.429725  328914 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	I1213 09:11:09.094083  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.094976  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:09.094990  333890 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:09.095048  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.122479  333890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.122516  333890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:09.122573  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.124934  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.126649  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.157673  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.240152  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:09.256813  333890 node_ready.go:35] waiting up to 6m0s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:09.266223  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:09.266249  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:09.266409  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.280359  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.282762  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:09.282784  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:09.306961  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:09.307019  333890 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:09.323015  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:09.323036  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:09.339143  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:09.339166  333890 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:09.367621  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:09.367646  333890 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:09.382705  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:09.382728  333890 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:09.398185  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:09.398219  333890 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:09.414356  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:09.414389  333890 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:09.430652  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:10.622141  333890 node_ready.go:49] node "embed-certs-379362" is "Ready"
	I1213 09:11:10.622177  333890 node_ready.go:38] duration metric: took 1.365330808s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:10.622194  333890 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:10.622248  333890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:11.141921  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875483061s)
	I1213 09:11:11.141933  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861538443s)
	I1213 09:11:11.142098  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.711411401s)
	I1213 09:11:11.142138  333890 api_server.go:72] duration metric: took 2.079421919s to wait for apiserver process to appear ...
	I1213 09:11:11.142151  333890 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:11.142170  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.143945  333890 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-379362 addons enable metrics-server
	
	I1213 09:11:11.149734  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.149761  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:11.155576  333890 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:11.156748  333890 addons.go:530] duration metric: took 2.094000513s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
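	As the out.go hint above notes, the dashboard's metrics views depend on the metrics-server addon, which is not in the enabled set. A sketch of enabling and then spot-checking it afterwards, assuming kubectl is pointed at the embed-certs-379362 context:
	
		minikube -p embed-certs-379362 addons enable metrics-server
		# once the addon's deployment is up, node metrics should resolve:
		kubectl top nodes
	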
	I1213 09:11:11.642554  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.648040  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.648073  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:12.142953  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:12.147533  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 09:11:12.148602  333890 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:12.148630  333890 api_server.go:131] duration metric: took 1.006470603s to wait for apiserver health ...
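	The 500 responses above come from post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that had not finished yet; once they complete, the same endpoint returns 200. The probe the test performs can be reproduced by hand, assuming access to the same cluster:
	
		# via the path kubectl already authenticates against:
		kubectl get --raw='/healthz?verbose'
		# or directly against the endpoint shown in the log (self-signed cert, hence -k):
		curl -k 'https://192.168.85.2:8443/healthz?verbose'
	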
	I1213 09:11:12.148643  333890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:12.152383  333890 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:12.152411  333890 system_pods.go:61] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.152418  333890 system_pods.go:61] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.152428  333890 system_pods.go:61] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.152449  333890 system_pods.go:61] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.152462  333890 system_pods.go:61] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.152469  333890 system_pods.go:61] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.152495  333890 system_pods.go:61] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.152526  333890 system_pods.go:61] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.152535  333890 system_pods.go:74] duration metric: took 3.881548ms to wait for pod list to return data ...
	I1213 09:11:12.152549  333890 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:12.155530  333890 default_sa.go:45] found service account: "default"
	I1213 09:11:12.155557  333890 default_sa.go:55] duration metric: took 3.001063ms for default service account to be created ...
	I1213 09:11:12.155568  333890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:12.158432  333890 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:12.158455  333890 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.158463  333890 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.158470  333890 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.158476  333890 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.158520  333890 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.158534  333890 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.158543  333890 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.158551  333890 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.158563  333890 system_pods.go:126] duration metric: took 2.988393ms to wait for k8s-apps to be running ...
	I1213 09:11:12.158571  333890 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:12.158615  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:12.172411  333890 system_svc.go:56] duration metric: took 13.834615ms WaitForService to wait for kubelet
	I1213 09:11:12.172438  333890 kubeadm.go:587] duration metric: took 3.109721475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:12.172457  333890 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:12.175344  333890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:12.175368  333890 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:12.175391  333890 node_conditions.go:105] duration metric: took 2.92165ms to run NodePressure ...
	I1213 09:11:12.175405  333890 start.go:242] waiting for startup goroutines ...
	I1213 09:11:12.175422  333890 start.go:247] waiting for cluster config update ...
	I1213 09:11:12.175436  333890 start.go:256] writing updated cluster config ...
	I1213 09:11:12.175704  333890 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:12.179850  333890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:12.183357  333890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:11:14.188818  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:16.189566  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:18.689697  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:20.690640  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
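	At the point this excerpt ends, coredns-66bc5c9577-24vtj is the remaining pod the test is still waiting on. The equivalent manual check, assuming the embed-certs-379362 context:
	
		kubectl -n kube-system get pods -l k8s-app=kube-dns
		# or block until it reports Ready, mirroring the test's 4m budget:
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	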
	
	
	==> CRI-O <==
	Dec 13 09:10:45 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:45.904032511Z" level=info msg="Started container" PID=1737 containerID=0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper id=7baaca90-45af-4bd3-8bd6-08961f7f5c65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcf7c2f3e8b106ff04aeb344a3ab5775fd66feb42ed71c70c0ad3a1c402bcb6
	Dec 13 09:10:46 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:46.852452708Z" level=info msg="Removing container: a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4" id=ff601e8a-5da4-4a94-b564-d4c658034ba7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:46 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:46.862139643Z" level=info msg="Removed container a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=ff601e8a-5da4-4a94-b564-d4c658034ba7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.878375416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=54786de1-98a1-417a-836f-147824bff875 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.879281211Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec231d73-d805-4dac-bf6a-7001c6a1e6fa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.880295198Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7d6d004c-e5bd-4fd8-81a6-d9b87cffbd68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.880423322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.88562017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.885787248Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1f88cc080f6576720f1efc3403a026d4cda8f18dc870a958e54fa3994c6e9585/merged/etc/passwd: no such file or directory"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.885820257Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1f88cc080f6576720f1efc3403a026d4cda8f18dc870a958e54fa3994c6e9585/merged/etc/group: no such file or directory"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.886123751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.916211118Z" level=info msg="Created container df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80: kube-system/storage-provisioner/storage-provisioner" id=7d6d004c-e5bd-4fd8-81a6-d9b87cffbd68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.916821187Z" level=info msg="Starting container: df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80" id=28b7c4fd-b98b-443f-a0c5-5f6a80a83095 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.918637463Z" level=info msg="Started container" PID=1751 containerID=df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80 description=kube-system/storage-provisioner/storage-provisioner id=28b7c4fd-b98b-443f-a0c5-5f6a80a83095 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eec461f955bd63c9379b362686ddd5ee2d6eb9ff9d34a53d030714fe5093bd7a
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.745062005Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5d8c9942-da22-4f03-9078-1d4a3613decd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.746175245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d84393f-b8a2-47c0-a698-c937252cb428 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.747282918Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=e91394f8-52ec-4f39-9436-5b3d5bd3c3b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.747408641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.754983295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.755702822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.787780086Z" level=info msg="Created container a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=e91394f8-52ec-4f39-9436-5b3d5bd3c3b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.788620289Z" level=info msg="Starting container: a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b" id=29dd111e-1e9f-4417-92a3-fb71831e80b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.79044667Z" level=info msg="Started container" PID=1767 containerID=a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper id=29dd111e-1e9f-4417-92a3-fb71831e80b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcf7c2f3e8b106ff04aeb344a3ab5775fd66feb42ed71c70c0ad3a1c402bcb6
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.893140212Z" level=info msg="Removing container: 0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b" id=da4469bb-7576-48cb-b974-b1e73a00d340 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.904216433Z" level=info msg="Removed container 0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=da4469bb-7576-48cb-b974-b1e73a00d340 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a4fcaf3a6c74b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   3dcf7c2f3e8b1       dashboard-metrics-scraper-5f989dc9cf-l8kcd       kubernetes-dashboard
	df4d7bf284592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   eec461f955bd6       storage-provisioner                              kube-system
	f28fd60687b20       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   9eedd4de5e872       kubernetes-dashboard-8694d4445c-jr9d8            kubernetes-dashboard
	4d79dc4e2f903       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   6526536ea7fe5       busybox                                          default
	08a51937a4354       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   9955cf104081b       coredns-5dd5756b68-g66tb                         kube-system
	736464be00c6b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   eec461f955bd6       storage-provisioner                              kube-system
	c1e919ad40225       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   e7ec864ecbe6e       kindnet-9hllk                                    kube-system
	6d4755f502135       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   64fd5834b8e0b       kube-proxy-6bkvj                                 kube-system
	0cc4f5e85cb5d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   d7a385e97242b       kube-apiserver-old-k8s-version-234538            kube-system
	ccfc11a0ddb83       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   d56041d4a2382       etcd-old-k8s-version-234538                      kube-system
	e2292eb60503a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   ed878c1443255       kube-scheduler-old-k8s-version-234538            kube-system
	b6d10fbd863a8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   f376c1694fa19       kube-controller-manager-old-k8s-version-234538   kube-system
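	The table above is CRI-O's view of the containers on the old-k8s-version-234538 node; the same listing can be pulled from inside the node, for example:
	
		minikube -p old-k8s-version-234538 ssh -- sudo crictl ps -a
		# add -o json for the full image/pod metadata behind each row
	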
	
	
	==> coredns [08a51937a4354c4bd30265f581946e42c57640c60d744db906265da68f2b4db2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45309 - 35110 "HINFO IN 8291808643669323414.2503676043295594618. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026324045s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
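	The repeated "waiting for Kubernetes API" lines above are CoreDNS starting before the apiserver finished coming back up; the same log stream can be followed directly (pod name taken from the container status table):
	
		kubectl -n kube-system logs coredns-5dd5756b68-g66tb -f
	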
	
	
	==> describe nodes <==
	Name:               old-k8s-version-234538
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-234538
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=old-k8s-version-234538
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-234538
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-234538
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b58ade41-ef0c-4ef7-817f-5090fbbdf23c
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-g66tb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-234538                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-9hllk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-234538             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-234538    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-6bkvj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-234538             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l8kcd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jr9d8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller
	  Normal  NodeReady                96s                  kubelet          Node old-k8s-version-234538 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                  node-controller  Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller
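	The node summary above is standard describe output; to regenerate it against this cluster:
	
		kubectl describe node old-k8s-version-234538
		# the Conditions block alone is also available as structured output:
		kubectl get node old-k8s-version-234538 -o jsonpath='{.status.conditions}'
	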
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [ccfc11a0ddb8317d8e1609f9778d0755dc87dac089178550d5aa53b7a0853424] <==
	{"level":"info","ts":"2025-12-13T09:10:24.598805Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:10:24.601512Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:10:25.344595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.34465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.34469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.344709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.345997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:10:25.346117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:10:25.347437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T09:10:25.347437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-13T09:10:25.345995Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-234538 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T09:10:25.350303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T09:10:25.350339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T09:10:32.665969Z","caller":"traceutil/trace.go:171","msg":"trace[957178955] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"110.042831ms","start":"2025-12-13T09:10:32.555904Z","end":"2025-12-13T09:10:32.665947Z","steps":["trace[957178955] 'process raft request'  (duration: 109.865345ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:32.793456Z","caller":"traceutil/trace.go:171","msg":"trace[285308251] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"123.478528ms","start":"2025-12-13T09:10:32.66995Z","end":"2025-12-13T09:10:32.793428Z","steps":["trace[285308251] 'process raft request'  (duration: 112.425856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.018101Z","caller":"traceutil/trace.go:171","msg":"trace[143662783] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"134.069463ms","start":"2025-12-13T09:10:32.884014Z","end":"2025-12-13T09:10:33.018084Z","steps":["trace[143662783] 'process raft request'  (duration: 133.942358ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.234427Z","caller":"traceutil/trace.go:171","msg":"trace[136283724] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"120.201008ms","start":"2025-12-13T09:10:33.114201Z","end":"2025-12-13T09:10:33.234402Z","steps":["trace[136283724] 'process raft request'  (duration: 100.369069ms)","trace[136283724] 'compare'  (duration: 19.70829ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:33.37955Z","caller":"traceutil/trace.go:171","msg":"trace[1484540722] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"101.766513ms","start":"2025-12-13T09:10:33.277756Z","end":"2025-12-13T09:10:33.379522Z","steps":["trace[1484540722] 'process raft request'  (duration: 69.529081ms)","trace[1484540722] 'compare'  (duration: 32.06527ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:33.525571Z","caller":"traceutil/trace.go:171","msg":"trace[1673762831] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"141.675757ms","start":"2025-12-13T09:10:33.383877Z","end":"2025-12-13T09:10:33.525553Z","steps":["trace[1673762831] 'process raft request'  (duration: 141.359711ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.64567Z","caller":"traceutil/trace.go:171","msg":"trace[245186077] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"114.407535ms","start":"2025-12-13T09:10:33.53124Z","end":"2025-12-13T09:10:33.645648Z","steps":["trace[245186077] 'process raft request'  (duration: 100.160208ms)","trace[245186077] 'compare'  (duration: 14.130867ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:10:33.917335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.719125ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357279072198348 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" mod_revision:512 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" value_size:658 lease:6414985242217422505 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:10:33.917751Z","caller":"traceutil/trace.go:171","msg":"trace[1239134521] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"205.961441ms","start":"2025-12-13T09:10:33.711759Z","end":"2025-12-13T09:10:33.917721Z","steps":["trace[1239134521] 'process raft request'  (duration: 88.278296ms)","trace[1239134521] 'compare'  (duration: 116.584159ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:11:23 up 53 min,  0 user,  load average: 3.93, 3.52, 2.36
	Linux old-k8s-version-234538 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c1e919ad40225b12a51ff83b8b1ab06eb950ed45acc023817130b2bbb115503a] <==
	I1213 09:10:27.352935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:27.353761       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 09:10:27.353984       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:27.354016       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:27.354038       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:27.557794       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:27.653720       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:27.653765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:27.653992       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:27.858363       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:27.858391       1 metrics.go:72] Registering metrics
	I1213 09:10:27.858438       1 controller.go:711] "Syncing nftables rules"
	I1213 09:10:37.565593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:37.565648       1 main.go:301] handling current node
	I1213 09:10:47.558230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:47.558264       1 main.go:301] handling current node
	I1213 09:10:57.566609       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:57.566639       1 main.go:301] handling current node
	I1213 09:11:07.557827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:11:07.557883       1 main.go:301] handling current node
	I1213 09:11:17.565285       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:11:17.565327       1 main.go:301] handling current node
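	kindnet above only ever reports the single node 192.168.76.2, which matches this one-node profile. Its daemonset and the node's pod CIDR can be cross-checked, assuming the daemonset keeps its usual kindnet name in kube-system:
	
		kubectl -n kube-system get ds kindnet
		kubectl get node old-k8s-version-234538 -o jsonpath='{.spec.podCIDR}'
	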
	
	
	==> kube-apiserver [0cc4f5e85cb5d4e6d07eeb129540177624ba0b7b05e38e98203ef68cb53670db] <==
	I1213 09:10:26.520075       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1213 09:10:26.584372       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:26.619842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 09:10:26.619857       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:10:26.619888       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 09:10:26.620101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 09:10:26.620123       1 aggregator.go:166] initial CRD sync complete...
	I1213 09:10:26.620128       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 09:10:26.620134       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:10:26.620139       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:10:26.620560       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 09:10:26.620775       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1213 09:10:26.620835       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 09:10:26.632436       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 09:10:27.525057       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:10:27.642827       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 09:10:27.693810       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 09:10:27.720348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:27.733383       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:27.746792       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 09:10:27.793042       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.70.55"}
	I1213 09:10:27.812130       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.178.132"}
	I1213 09:10:38.989985       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 09:10:39.004559       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 09:10:39.019818       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b6d10fbd863a8a81004a0b20dba55d8b74f364e15f804329d14979332876f75a] <==
	I1213 09:10:39.043922       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:10:39.043986       1 taint_manager.go:211] "Sending events to api server"
	I1213 09:10:39.044204       1 event.go:307] "Event occurred" object="old-k8s-version-234538" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller"
	I1213 09:10:39.048703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.749316ms"
	I1213 09:10:39.049340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.366µs"
	I1213 09:10:39.049531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.324µs"
	I1213 09:10:39.054524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.725072ms"
	I1213 09:10:39.054669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.532µs"
	I1213 09:10:39.058613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.93µs"
	I1213 09:10:39.077846       1 shared_informer.go:318] Caches are synced for disruption
	I1213 09:10:39.149383       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1213 09:10:39.167128       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:10:39.232404       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:10:39.553619       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:10:39.626203       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:10:39.626252       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 09:10:42.864159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.486312ms"
	I1213 09:10:42.864318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.889µs"
	I1213 09:10:45.856533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.783µs"
	I1213 09:10:46.863897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.778µs"
	I1213 09:10:47.865041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="142.87µs"
	I1213 09:11:01.903578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.16µs"
	I1213 09:11:07.191772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.893039ms"
	I1213 09:11:07.191989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.045µs"
	I1213 09:11:09.354955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.613µs"
	
	
	==> kube-proxy [6d4755f502135ef94a11ec27217a6459bc85937c42dc06e9a1e638df610779fb] <==
	I1213 09:10:27.160617       1 server_others.go:69] "Using iptables proxy"
	I1213 09:10:27.171062       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 09:10:27.190048       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:27.192527       1 server_others.go:152] "Using iptables Proxier"
	I1213 09:10:27.192566       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 09:10:27.192574       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 09:10:27.192615       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 09:10:27.192921       1 server.go:846] "Version info" version="v1.28.0"
	I1213 09:10:27.193035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:27.193876       1 config.go:315] "Starting node config controller"
	I1213 09:10:27.193936       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 09:10:27.194827       1 config.go:188] "Starting service config controller"
	I1213 09:10:27.194895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 09:10:27.195031       1 config.go:97] "Starting endpoint slice config controller"
	I1213 09:10:27.195104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 09:10:27.294162       1 shared_informer.go:318] Caches are synced for node config
	I1213 09:10:27.295673       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 09:10:27.295681       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e2292eb60503a271d5b03a7e7a8cf528dea0e07edd89ce5c55a81bf4b0c2b310] <==
	I1213 09:10:25.248402       1 serving.go:348] Generated self-signed cert in-memory
	W1213 09:10:26.542354       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:10:26.542393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:10:26.542409       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:10:26.542420       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:10:26.557586       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 09:10:26.558627       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:26.560698       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:10:26.560774       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 09:10:26.565044       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 09:10:26.565147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1213 09:10:26.574256       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 09:10:26.574314       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1213 09:10:28.161100       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141352     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9751c53e-7c8a-44eb-b1b0-bff398385c78-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jr9d8\" (UID: \"9751c53e-7c8a-44eb-b1b0-bff398385c78\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141425     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/630854c3-c982-45a0-9ded-c90136790884-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l8kcd\" (UID: \"630854c3-c982-45a0-9ded-c90136790884\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141583     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r945v\" (UniqueName: \"kubernetes.io/projected/630854c3-c982-45a0-9ded-c90136790884-kube-api-access-r945v\") pod \"dashboard-metrics-scraper-5f989dc9cf-l8kcd\" (UID: \"630854c3-c982-45a0-9ded-c90136790884\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141631     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hrb\" (UniqueName: \"kubernetes.io/projected/9751c53e-7c8a-44eb-b1b0-bff398385c78-kube-api-access-t7hrb\") pod \"kubernetes-dashboard-8694d4445c-jr9d8\" (UID: \"9751c53e-7c8a-44eb-b1b0-bff398385c78\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8"
	Dec 13 09:10:42 old-k8s-version-234538 kubelet[721]: I1213 09:10:42.854822     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8" podStartSLOduration=0.860508944 podCreationTimestamp="2025-12-13 09:10:39 +0000 UTC" firstStartedPulling="2025-12-13 09:10:39.361248113 +0000 UTC m=+15.741640298" lastFinishedPulling="2025-12-13 09:10:42.355500508 +0000 UTC m=+18.735892704" observedRunningTime="2025-12-13 09:10:42.854543937 +0000 UTC m=+19.234936137" watchObservedRunningTime="2025-12-13 09:10:42.85476135 +0000 UTC m=+19.235153548"
	Dec 13 09:10:45 old-k8s-version-234538 kubelet[721]: I1213 09:10:45.844093     721 scope.go:117] "RemoveContainer" containerID="a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: I1213 09:10:46.850895     721 scope.go:117] "RemoveContainer" containerID="a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: I1213 09:10:46.851092     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: E1213 09:10:46.851497     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:47 old-k8s-version-234538 kubelet[721]: I1213 09:10:47.855167     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:47 old-k8s-version-234538 kubelet[721]: E1213 09:10:47.855542     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:49 old-k8s-version-234538 kubelet[721]: I1213 09:10:49.339236     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:49 old-k8s-version-234538 kubelet[721]: E1213 09:10:49.339551     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:57 old-k8s-version-234538 kubelet[721]: I1213 09:10:57.877834     721 scope.go:117] "RemoveContainer" containerID="736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.744404     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.891636     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.891902     721 scope.go:117] "RemoveContainer" containerID="a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: E1213 09:11:01.892293     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:11:09 old-k8s-version-234538 kubelet[721]: I1213 09:11:09.339353     721 scope.go:117] "RemoveContainer" containerID="a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	Dec 13 09:11:09 old-k8s-version-234538 kubelet[721]: E1213 09:11:09.339790     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:21 old-k8s-version-234538 kubelet[721]: I1213 09:11:21.144864     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: kubelet.service: Consumed 1.659s CPU time.
	
	
	==> kubernetes-dashboard [f28fd60687b20846a254002fb7cb4119ba9d01d31a94e6c8eec0afa665e5faa0] <==
	2025/12/13 09:10:42 Starting overwatch
	2025/12/13 09:10:42 Using namespace: kubernetes-dashboard
	2025/12/13 09:10:42 Using in-cluster config to connect to apiserver
	2025/12/13 09:10:42 Using secret token for csrf signing
	2025/12/13 09:10:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:10:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:10:42 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 09:10:42 Generating JWE encryption key
	2025/12/13 09:10:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:10:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:10:42 Initializing JWE encryption key from synchronized object
	2025/12/13 09:10:42 Creating in-cluster Sidecar client
	2025/12/13 09:10:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:10:42 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05] <==
	I1213 09:10:27.142691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:10:57.146921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80] <==
	I1213 09:10:57.931974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:10:57.940138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:10:57.940197       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 09:11:15.339615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:15.339749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60b54cd0-ddd8-481a-8123-7f67477a3495", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8 became leader
	I1213 09:11:15.339798       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8!
	I1213 09:11:15.440502       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234538 -n old-k8s-version-234538: exit status 2 (328.899134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
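	The status probe above is a plain CLI invocation whose exit code is read leniently (exit status 2 only signals that some component is not Running). As a rough illustration only, a minimal Go sketch of the same check could look like the following; the helper name checkComponent, the use of os/exec, and the hard-coded binary path are illustrative assumptions, not the actual helpers_test.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkComponent runs `minikube status` with a Go-template format string,
	// mirroring the probe shown above (e.g. field "APIServer" or "Host").
	// Exit status 2 is treated as "may be ok": the component state is still
	// returned instead of failing outright. Binary path and profile are
	// placeholders taken from the run above.
	func checkComponent(profile, field string) (string, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{."+field+"}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout is returned even on a non-zero exit
		status := strings.TrimSpace(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 2 {
			return status, nil // exit status 2 (may be ok)
		}
		return status, err
	}

	func main() {
		s, err := checkComponent("old-k8s-version-234538", "APIServer")
		fmt.Println(s, err)
	}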
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-234538 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-234538
helpers_test.go:244: (dbg) docker inspect old-k8s-version-234538:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	        "Created": "2025-12-13T09:09:04.827842959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 324903,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:10:17.20091342Z",
	            "FinishedAt": "2025-12-13T09:10:16.209321608Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/hosts",
	        "LogPath": "/var/lib/docker/containers/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e/9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e-json.log",
	        "Name": "/old-k8s-version-234538",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-234538:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-234538",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9956457b660b9bc78b29b4e7f05ab5dfdd2537934a81c96e5281641c980dbf2e",
	                "LowerDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ff0536271632f931d6d08f0fc2e635f1db6acd2a26a40bb7a01b3d549888fae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-234538",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-234538/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-234538",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-234538",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "01bb6e6fa13e1364cd3b276174b95fdc3ffc32c99eb3ae3bb02ee98b4ef570c4",
	            "SandboxKey": "/var/run/docker/netns/01bb6e6fa13e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-234538": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03cd5b8c21bed175419d3254147f83d57d2a9fa170523cc5fcd50bb748af5603",
	                    "EndpointID": "0b69255072d972c007882ec64d0bb2fbdf6285823076eea000d7681f1a1ec0be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fe:88:91:21:4d:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-234538",
	                        "9956457b660b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
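	The NetworkSettings.Ports map in the inspect output above is the structure that gets templated when a host-side port (for example the 22/tcp SSH mapping) is needed. As an illustrative sketch only, the same value can be pulled with docker's standard -f/--format Go-template flag; the function name sshHostPort and the use of Go's os/exec are assumptions, not code from this test suite:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks `docker container inspect` for the host port bound to
	// 22/tcp, using a Go template over the same Ports map shown above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := sshHostPort("old-k8s-version-234538")
		// For the inspect output above this should print 33113.
		fmt.Println(port, err)
	}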
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538: exit status 2 (341.645615ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-234538 logs -n 25: (1.208683289s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-833990 sudo crio config                                                                                                                                                                                                             │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ delete  │ -p bridge-833990                                                                                                                                                                                                                              │ bridge-833990                │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:09 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable metrics-server -p no-preload-291522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-234538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │                     │
	│ stop    │ -p no-preload-291522 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:09 UTC │ 13 Dec 25 09:10 UTC │
	│ stop    │ -p old-k8s-version-234538 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                  │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                               │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                      │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                    │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                               │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:01.859652  333890 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:01.859763  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.859768  333890 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:01.859780  333890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:01.860007  333890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:01.860461  333890 out.go:368] Setting JSON to false
	I1213 09:11:01.861836  333890 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3214,"bootTime":1765613848,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:01.861905  333890 start.go:143] virtualization: kvm guest
	I1213 09:11:01.863731  333890 out.go:179] * [embed-certs-379362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:01.865249  333890 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:01.865281  333890 notify.go:221] Checking for updates...
	I1213 09:11:01.867359  333890 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:01.868519  333890 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:01.869842  333890 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:01.871012  333890 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:01.872143  333890 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:01.873683  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:01.874233  333890 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:01.901548  333890 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:01.901656  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:01.959403  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:01.949301411 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:01.959565  333890 docker.go:319] overlay module found
	I1213 09:11:01.961826  333890 out.go:179] * Using the docker driver based on existing profile
	W1213 09:10:57.872528  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:10:59.873309  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:01.962862  333890 start.go:309] selected driver: docker
	I1213 09:11:01.962874  333890 start.go:927] validating driver "docker" against &{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:01.962966  333890 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:01.963566  333890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:02.021259  333890 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 09:11:02.010959916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:02.021565  333890 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:02.021623  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:02.021676  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:02.021713  333890 start.go:353] cluster config:
	{Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:02.023438  333890 out.go:179] * Starting "embed-certs-379362" primary control-plane node in "embed-certs-379362" cluster
	I1213 09:11:02.024571  333890 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:02.025856  333890 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:02.026959  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:02.026992  333890 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:02.027007  333890 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:02.027033  333890 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:02.027086  333890 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:02.027100  333890 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:02.027214  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.048858  333890 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:02.048877  333890 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:02.048892  333890 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:02.048922  333890 start.go:360] acquireMachinesLock for embed-certs-379362: {Name:mk2ae32cc4beadbba6a2e4810e36036ee6a949ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:02.048994  333890 start.go:364] duration metric: took 42.67µs to acquireMachinesLock for "embed-certs-379362"
	I1213 09:11:02.049011  333890 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:02.049016  333890 fix.go:54] fixHost starting: 
	I1213 09:11:02.049233  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.068302  333890 fix.go:112] recreateIfNeeded on embed-certs-379362: state=Stopped err=<nil>
	W1213 09:11:02.068327  333890 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:10:59.583124  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.082475  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	W1213 09:11:02.629196  323665 pod_ready.go:104] pod "coredns-7d764666f9-r95cr" is not "Ready", error: <nil>
	I1213 09:11:03.625367  323665 pod_ready.go:94] pod "coredns-7d764666f9-r95cr" is "Ready"
	I1213 09:11:03.625394  323665 pod_ready.go:86] duration metric: took 37.505010805s for pod "coredns-7d764666f9-r95cr" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.628034  323665 pod_ready.go:83] waiting for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.631736  323665 pod_ready.go:94] pod "etcd-no-preload-291522" is "Ready"
	I1213 09:11:03.631760  323665 pod_ready.go:86] duration metric: took 3.705789ms for pod "etcd-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.633687  323665 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.637223  323665 pod_ready.go:94] pod "kube-apiserver-no-preload-291522" is "Ready"
	I1213 09:11:03.637246  323665 pod_ready.go:86] duration metric: took 3.541562ms for pod "kube-apiserver-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.638918  323665 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:03.823946  323665 pod_ready.go:94] pod "kube-controller-manager-no-preload-291522" is "Ready"
	I1213 09:11:03.823973  323665 pod_ready.go:86] duration metric: took 185.03756ms for pod "kube-controller-manager-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.024005  323665 pod_ready.go:83] waiting for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.424202  323665 pod_ready.go:94] pod "kube-proxy-ktgbz" is "Ready"
	I1213 09:11:04.424226  323665 pod_ready.go:86] duration metric: took 400.196554ms for pod "kube-proxy-ktgbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:04.624268  323665 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023621  323665 pod_ready.go:94] pod "kube-scheduler-no-preload-291522" is "Ready"
	I1213 09:11:05.023647  323665 pod_ready.go:86] duration metric: took 399.354065ms for pod "kube-scheduler-no-preload-291522" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:05.023659  323665 pod_ready.go:40] duration metric: took 38.976009117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:05.066541  323665 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:05.068302  323665 out.go:179] * Done! kubectl is now configured to use "no-preload-291522" cluster and "default" namespace by default
	I1213 09:11:02.070162  333890 out.go:252] * Restarting existing docker container for "embed-certs-379362" ...
	I1213 09:11:02.070221  333890 cli_runner.go:164] Run: docker start embed-certs-379362
	I1213 09:11:02.321118  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:02.339633  333890 kic.go:430] container "embed-certs-379362" state is running.
	I1213 09:11:02.340097  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:02.359827  333890 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/config.json ...
	I1213 09:11:02.360100  333890 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:02.360192  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:02.380390  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:02.380635  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:02.380649  333890 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:02.381372  333890 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33123: read: connection reset by peer
	I1213 09:11:05.518562  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.518593  333890 ubuntu.go:182] provisioning hostname "embed-certs-379362"
	I1213 09:11:05.518644  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.537736  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.538011  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.538026  333890 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-379362 && echo "embed-certs-379362" | sudo tee /etc/hostname
	I1213 09:11:05.683114  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-379362
	
	I1213 09:11:05.683217  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:05.702249  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:05.702628  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:05.702658  333890 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-379362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-379362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-379362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:05.839172  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:11:05.839203  333890 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:05.839221  333890 ubuntu.go:190] setting up certificates
	I1213 09:11:05.839232  333890 provision.go:84] configureAuth start
	I1213 09:11:05.839277  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:05.857894  333890 provision.go:143] copyHostCerts
	I1213 09:11:05.857989  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:05.858008  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:05.858077  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:05.858209  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:05.858219  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:05.858255  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:05.858308  333890 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:05.858315  333890 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:05.858338  333890 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:05.858384  333890 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.embed-certs-379362 san=[127.0.0.1 192.168.85.2 embed-certs-379362 localhost minikube]
	I1213 09:11:05.995748  333890 provision.go:177] copyRemoteCerts
	I1213 09:11:05.995808  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:05.995841  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.014933  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.113890  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:06.131828  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1213 09:11:06.149744  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:11:06.167004  333890 provision.go:87] duration metric: took 327.760831ms to configureAuth
	I1213 09:11:06.167034  333890 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:06.167248  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:06.167371  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.186434  333890 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:06.186700  333890 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1213 09:11:06.186718  333890 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:06.519456  333890 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:06.519500  333890 machine.go:97] duration metric: took 4.159363834s to provisionDockerMachine
	I1213 09:11:06.519515  333890 start.go:293] postStartSetup for "embed-certs-379362" (driver="docker")
	I1213 09:11:06.519528  333890 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:06.519593  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:06.519656  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.538380  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.634842  333890 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:06.638452  333890 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:06.638473  333890 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:06.638495  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:06.638554  333890 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:06.638653  333890 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:06.638763  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:06.646671  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:06.664174  333890 start.go:296] duration metric: took 144.644973ms for postStartSetup
	I1213 09:11:06.664268  333890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:06.664305  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.683615  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.779502  333890 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:06.785404  333890 fix.go:56] duration metric: took 4.736380482s for fixHost
	I1213 09:11:06.785434  333890 start.go:83] releasing machines lock for "embed-certs-379362", held for 4.736428362s
	I1213 09:11:06.785524  333890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-379362
	I1213 09:11:06.808003  333890 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:06.808061  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.808078  333890 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:06.808172  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:06.833412  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:06.833605  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	W1213 09:11:02.373908  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:04.872547  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	W1213 09:11:06.873449  324697 pod_ready.go:104] pod "coredns-5dd5756b68-g66tb" is not "Ready", error: <nil>
	I1213 09:11:06.984735  333890 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:06.991583  333890 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:07.026938  333890 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:07.031772  333890 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:07.031840  333890 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:07.039992  333890 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:07.040013  333890 start.go:496] detecting cgroup driver to use...
	I1213 09:11:07.040046  333890 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:07.040090  333890 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:07.054785  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:07.068014  333890 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:07.068059  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:07.083003  333890 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:07.096366  333890 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:07.183847  333890 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:07.269721  333890 docker.go:234] disabling docker service ...
	I1213 09:11:07.269771  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:07.285161  333890 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:07.297389  333890 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:07.384882  333890 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:07.467142  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:07.481367  333890 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:07.495794  333890 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:07.495842  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.505016  333890 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:07.505072  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.514873  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.523864  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.532764  333890 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:07.541036  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.549898  333890 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.558670  333890 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:07.568189  333890 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:07.575855  333890 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:07.582903  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:07.670568  333890 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:11:07.843644  333890 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:07.843715  333890 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:07.848433  333890 start.go:564] Will wait 60s for crictl version
	I1213 09:11:07.848528  333890 ssh_runner.go:195] Run: which crictl
	I1213 09:11:07.852256  333890 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:07.876837  333890 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:07.876932  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.904955  333890 ssh_runner.go:195] Run: crio --version
	I1213 09:11:07.933896  333890 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	W1213 09:11:04.083292  328914 node_ready.go:57] node "default-k8s-diff-port-361270" has "Ready":"False" status (will retry)
	I1213 09:11:06.583127  328914 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:06.583165  328914 node_ready.go:38] duration metric: took 11.003480314s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:06.583181  328914 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:06.583231  328914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:06.594500  328914 api_server.go:72] duration metric: took 11.299110433s to wait for apiserver process to appear ...
	I1213 09:11:06.594525  328914 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:06.594541  328914 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:06.599417  328914 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1213 09:11:06.600336  328914 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:06.600358  328914 api_server.go:131] duration metric: took 5.826824ms to wait for apiserver health ...
	I1213 09:11:06.600365  328914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:06.603252  328914 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:06.603278  328914 system_pods.go:61] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.603283  328914 system_pods.go:61] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.603289  328914 system_pods.go:61] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.603292  328914 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.603296  328914 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.603302  328914 system_pods.go:61] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.603305  328914 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.603310  328914 system_pods.go:61] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.603316  328914 system_pods.go:74] duration metric: took 2.9457ms to wait for pod list to return data ...
	I1213 09:11:06.603325  328914 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:06.605317  328914 default_sa.go:45] found service account: "default"
	I1213 09:11:06.605334  328914 default_sa.go:55] duration metric: took 2.001953ms for default service account to be created ...
	I1213 09:11:06.605341  328914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:06.607611  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.607633  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.607645  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.607651  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.607654  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.607658  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.607662  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.607665  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.607669  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.607685  328914 retry.go:31] will retry after 272.651119ms: missing components: kube-dns
	I1213 09:11:06.885001  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:06.885038  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:06.885046  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:06.885055  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:06.885061  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:06.885067  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:06.885073  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:06.885078  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:06.885087  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:06.885109  328914 retry.go:31] will retry after 389.523569ms: missing components: kube-dns
	I1213 09:11:07.279258  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.279287  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.279293  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.279298  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.279302  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.279305  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.279308  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.279317  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.279322  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.279335  328914 retry.go:31] will retry after 448.006807ms: missing components: kube-dns
	I1213 09:11:07.732933  328914 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:07.732978  328914 system_pods.go:89] "coredns-66bc5c9577-xhjmn" [3906f322-2c03-4d49-a6db-af27bf718d3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:07.732988  328914 system_pods.go:89] "etcd-default-k8s-diff-port-361270" [395142f3-fc34-4a91-880e-27f01cde7b48] Running
	I1213 09:11:07.732997  328914 system_pods.go:89] "kindnet-g6h8g" [7f054428-09d6-450c-befd-066cedc40ad4] Running
	I1213 09:11:07.733002  328914 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361270" [76b2cb8e-bc9d-4145-890f-8a02f78a02c1] Running
	I1213 09:11:07.733008  328914 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361270" [04282173-4e41-4f4d-98ab-6a3c24a45f79] Running
	I1213 09:11:07.733012  328914 system_pods.go:89] "kube-proxy-78nr2" [337a66f0-19f1-4351-bebf-7872d75ebf3e] Running
	I1213 09:11:07.733016  328914 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361270" [6ea66785-2af0-4772-9b34-fb73de06add1] Running
	I1213 09:11:07.733020  328914 system_pods.go:89] "storage-provisioner" [9268f47f-d59e-482a-82e1-d77d41735195] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:07.733031  328914 system_pods.go:126] duration metric: took 1.127684936s to wait for k8s-apps to be running ...
	I1213 09:11:07.733038  328914 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:07.733082  328914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:07.749643  328914 system_svc.go:56] duration metric: took 16.594824ms WaitForService to wait for kubelet
	I1213 09:11:07.749674  328914 kubeadm.go:587] duration metric: took 12.454300158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:07.749698  328914 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:07.752080  328914 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:07.752112  328914 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:07.752131  328914 node_conditions.go:105] duration metric: took 2.42792ms to run NodePressure ...
	I1213 09:11:07.752146  328914 start.go:242] waiting for startup goroutines ...
	I1213 09:11:07.752160  328914 start.go:247] waiting for cluster config update ...
	I1213 09:11:07.752173  328914 start.go:256] writing updated cluster config ...
	I1213 09:11:07.752508  328914 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:07.757523  328914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:07.761238  328914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.766432  328914 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:11:07.766458  328914 pod_ready.go:86] duration metric: took 5.192246ms for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.832062  328914 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.840179  328914 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.840203  328914 pod_ready.go:86] duration metric: took 8.11705ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.842550  328914 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.846547  328914 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:07.846570  328914 pod_ready.go:86] duration metric: took 3.999501ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.848547  328914 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.161326  328914 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:08.161349  328914 pod_ready.go:86] duration metric: took 312.780385ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.372943  324697 pod_ready.go:94] pod "coredns-5dd5756b68-g66tb" is "Ready"
	I1213 09:11:07.372967  324697 pod_ready.go:86] duration metric: took 39.505999616s for pod "coredns-5dd5756b68-g66tb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.375663  324697 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.379892  324697 pod_ready.go:94] pod "etcd-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.379916  324697 pod_ready.go:86] duration metric: took 4.234738ms for pod "etcd-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.382722  324697 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.386579  324697 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.386602  324697 pod_ready.go:86] duration metric: took 3.859665ms for pod "kube-apiserver-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.388935  324697 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.570936  324697 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-234538" is "Ready"
	I1213 09:11:07.570963  324697 pod_ready.go:86] duration metric: took 182.006223ms for pod "kube-controller-manager-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:07.772324  324697 pod_ready.go:83] waiting for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.173608  324697 pod_ready.go:94] pod "kube-proxy-6bkvj" is "Ready"
	I1213 09:11:08.173638  324697 pod_ready.go:86] duration metric: took 401.292694ms for pod "kube-proxy-6bkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.372409  324697 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772063  324697 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-234538" is "Ready"
	I1213 09:11:08.772095  324697 pod_ready.go:86] duration metric: took 399.659792ms for pod "kube-scheduler-old-k8s-version-234538" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.772110  324697 pod_ready.go:40] duration metric: took 40.909481149s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:08.832194  324697 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1213 09:11:08.834797  324697 out.go:203] 
	W1213 09:11:08.836008  324697 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1213 09:11:08.837190  324697 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1213 09:11:08.838445  324697 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-234538" cluster and "default" namespace by default
	I1213 09:11:07.935243  333890 cli_runner.go:164] Run: docker network inspect embed-certs-379362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:07.953455  333890 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:07.957554  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:07.968284  333890 kubeadm.go:884] updating cluster {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:07.968419  333890 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:07.968476  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.002674  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.002700  333890 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:08.002756  333890 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:08.028193  333890 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:08.028216  333890 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:08.028225  333890 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1213 09:11:08.028332  333890 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-379362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:11:08.028403  333890 ssh_runner.go:195] Run: crio config
	I1213 09:11:08.074930  333890 cni.go:84] Creating CNI manager for ""
	I1213 09:11:08.074949  333890 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:08.074961  333890 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:08.074981  333890 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-379362 NodeName:embed-certs-379362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:08.075100  333890 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-379362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:11:08.075176  333890 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:08.083542  333890 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:08.083624  333890 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:08.091566  333890 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1213 09:11:08.104461  333890 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:08.117321  333890 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1213 09:11:08.130224  333890 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:08.134005  333890 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:08.144074  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:08.224481  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:08.245774  333890 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362 for IP: 192.168.85.2
	I1213 09:11:08.245792  333890 certs.go:195] generating shared ca certs ...
	I1213 09:11:08.245810  333890 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:08.245989  333890 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:08.246048  333890 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:08.246059  333890 certs.go:257] generating profile certs ...
	I1213 09:11:08.246147  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/client.key
	I1213 09:11:08.246205  333890 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key.814e7b8a
	I1213 09:11:08.246246  333890 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key
	I1213 09:11:08.246349  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:08.246386  333890 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:08.246398  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:08.246422  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:08.246445  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:08.246474  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:08.246555  333890 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:08.247224  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:08.265750  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:08.284698  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:08.304326  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:08.329185  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1213 09:11:08.348060  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 09:11:08.365610  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:08.383456  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/embed-certs-379362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 09:11:08.400955  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:08.418539  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:08.436393  333890 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:08.454266  333890 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:08.466744  333890 ssh_runner.go:195] Run: openssl version
	I1213 09:11:08.473100  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.480536  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:08.488383  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492189  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.492239  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:08.529232  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:08.537596  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.545251  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:08.552715  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556579  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.556629  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:08.600524  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:11:08.608451  333890 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.616267  333890 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:08.624437  333890 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628633  333890 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.628687  333890 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:08.663783  333890 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:08.672093  333890 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:08.676012  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:08.714649  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:08.753817  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:08.802703  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:08.851736  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:08.921259  333890 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
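The `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours before the cluster is restarted. A minimal Go sketch of the same test, using one of the certificate paths from this run; the standalone-program framing is illustrative only and is not minikube's own implementation.

// Sketch: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
// The path below is taken from the log; everything else is an assumption.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}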
	I1213 09:11:08.977170  333890 kubeadm.go:401] StartCluster: {Name:embed-certs-379362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:embed-certs-379362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:08.977291  333890 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:08.977362  333890 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:09.015784  333890 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:09.015811  333890 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:09.015818  333890 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:09.015825  333890 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:09.015829  333890 cri.go:89] found id: ""
	I1213 09:11:09.015875  333890 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:09.030638  333890 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:09Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:09.030704  333890 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:09.039128  333890 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:09.039178  333890 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:09.039248  333890 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:09.047141  333890 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:09.048055  333890 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-379362" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.048563  333890 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-379362" cluster setting kubeconfig missing "embed-certs-379362" context setting]
	I1213 09:11:09.049221  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.050957  333890 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:09.059934  333890 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1213 09:11:09.059966  333890 kubeadm.go:602] duration metric: took 20.780797ms to restartPrimaryControlPlane
	I1213 09:11:09.059975  333890 kubeadm.go:403] duration metric: took 82.814517ms to StartCluster
	I1213 09:11:09.059992  333890 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.060056  333890 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:09.062377  333890 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:09.062685  333890 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:09.062757  333890 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:09.062848  333890 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-379362"
	I1213 09:11:09.062864  333890 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-379362"
	W1213 09:11:09.062872  333890 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:09.062901  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062909  333890 addons.go:70] Setting dashboard=true in profile "embed-certs-379362"
	I1213 09:11:09.062926  333890 addons.go:239] Setting addon dashboard=true in "embed-certs-379362"
	W1213 09:11:09.062935  333890 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:09.062946  333890 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:09.062959  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.062995  333890 addons.go:70] Setting default-storageclass=true in profile "embed-certs-379362"
	I1213 09:11:09.063010  333890 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-379362"
	I1213 09:11:09.063289  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063415  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.063500  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.067611  333890 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:09.069241  333890 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:09.089368  333890 addons.go:239] Setting addon default-storageclass=true in "embed-certs-379362"
	W1213 09:11:09.089396  333890 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:09.089421  333890 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:09.089959  333890 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:09.091596  333890 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:09.091621  333890 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:09.094004  333890 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.094022  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:09.094036  333890 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:08.362204  328914 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.762127  328914 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:11:08.762159  328914 pod_ready.go:86] duration metric: took 399.931988ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:08.963595  328914 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362581  328914 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:09.362857  328914 pod_ready.go:86] duration metric: took 399.227137ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:09.362881  328914 pod_ready.go:40] duration metric: took 1.60532416s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:09.427945  328914 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:09.429725  328914 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	I1213 09:11:09.094083  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.094976  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:09.094990  333890 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:09.095048  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.122479  333890 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.122516  333890 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:09.122573  333890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:09.124934  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.126649  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.157673  333890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:09.240152  333890 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:09.256813  333890 node_ready.go:35] waiting up to 6m0s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:09.266223  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:09.266249  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:09.266409  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:09.280359  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:09.282762  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:09.282784  333890 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:09.306961  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:09.307019  333890 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:09.323015  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:09.323036  333890 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:09.339143  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:09.339166  333890 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:09.367621  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:09.367646  333890 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:09.382705  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:09.382728  333890 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:09.398185  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:09.398219  333890 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:09.414356  333890 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:09.414389  333890 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:09.430652  333890 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:10.622141  333890 node_ready.go:49] node "embed-certs-379362" is "Ready"
	I1213 09:11:10.622177  333890 node_ready.go:38] duration metric: took 1.365330808s for node "embed-certs-379362" to be "Ready" ...
	I1213 09:11:10.622194  333890 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:10.622248  333890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:11.141921  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.875483061s)
	I1213 09:11:11.141933  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.861538443s)
	I1213 09:11:11.142098  333890 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.711411401s)
	I1213 09:11:11.142138  333890 api_server.go:72] duration metric: took 2.079421919s to wait for apiserver process to appear ...
	I1213 09:11:11.142151  333890 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:11.142170  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.143945  333890 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-379362 addons enable metrics-server
	
	I1213 09:11:11.149734  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.149761  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:11.155576  333890 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:11.156748  333890 addons.go:530] duration metric: took 2.094000513s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:11:11.642554  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:11.648040  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:11.648073  333890 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
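The 500 responses above come from the apiserver's aggregate /healthz while two post-start hooks (rbac/bootstrap-roles and the priority-class bootstrap) are still completing; the start code simply re-polls until the endpoint returns 200, as the next lines show. A minimal Go poller under the same assumptions; the endpoint is the one from this log, and TLS verification is skipped only because this sketch does not load the cluster CA, which is not how minikube's own check works.

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz request failed, retrying:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver did not become healthy before the deadline")
	os.Exit(1)
}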
	I1213 09:11:12.142953  333890 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1213 09:11:12.147533  333890 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1213 09:11:12.148602  333890 api_server.go:141] control plane version: v1.34.2
	I1213 09:11:12.148630  333890 api_server.go:131] duration metric: took 1.006470603s to wait for apiserver health ...
	I1213 09:11:12.148643  333890 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:12.152383  333890 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:12.152411  333890 system_pods.go:61] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.152418  333890 system_pods.go:61] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.152428  333890 system_pods.go:61] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.152449  333890 system_pods.go:61] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.152462  333890 system_pods.go:61] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.152469  333890 system_pods.go:61] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.152495  333890 system_pods.go:61] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.152526  333890 system_pods.go:61] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.152535  333890 system_pods.go:74] duration metric: took 3.881548ms to wait for pod list to return data ...
	I1213 09:11:12.152549  333890 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:12.155530  333890 default_sa.go:45] found service account: "default"
	I1213 09:11:12.155557  333890 default_sa.go:55] duration metric: took 3.001063ms for default service account to be created ...
	I1213 09:11:12.155568  333890 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 09:11:12.158432  333890 system_pods.go:86] 8 kube-system pods found
	I1213 09:11:12.158455  333890 system_pods.go:89] "coredns-66bc5c9577-24vtj" [8986d496-b2cb-429d-80ec-2f326920e440] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1213 09:11:12.158463  333890 system_pods.go:89] "etcd-embed-certs-379362" [cfdea667-b08a-4d24-b7f4-0fe21dbc5388] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:12.158470  333890 system_pods.go:89] "kindnet-4vk4d" [23fa27ce-887f-4910-af8d-74b11ea2df32] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1213 09:11:12.158476  333890 system_pods.go:89] "kube-apiserver-embed-certs-379362" [24a409bb-590d-4ac2-9246-7dba3fc3f946] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:12.158520  333890 system_pods.go:89] "kube-controller-manager-embed-certs-379362" [77968fd1-b384-4df9-86bd-289d910ba778] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:11:12.158534  333890 system_pods.go:89] "kube-proxy-zmtpb" [c6bfb114-7843-46f4-8244-db73b00b7e6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1213 09:11:12.158543  333890 system_pods.go:89] "kube-scheduler-embed-certs-379362" [eb180ea3-0cfe-44f4-a995-7612e63240ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:12.158551  333890 system_pods.go:89] "storage-provisioner" [937cc208-1949-4660-a328-292224786f1b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1213 09:11:12.158563  333890 system_pods.go:126] duration metric: took 2.988393ms to wait for k8s-apps to be running ...
	I1213 09:11:12.158571  333890 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 09:11:12.158615  333890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:12.172411  333890 system_svc.go:56] duration metric: took 13.834615ms WaitForService to wait for kubelet
	I1213 09:11:12.172438  333890 kubeadm.go:587] duration metric: took 3.109721475s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:12.172457  333890 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:12.175344  333890 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:12.175368  333890 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:12.175391  333890 node_conditions.go:105] duration metric: took 2.92165ms to run NodePressure ...
	I1213 09:11:12.175405  333890 start.go:242] waiting for startup goroutines ...
	I1213 09:11:12.175422  333890 start.go:247] waiting for cluster config update ...
	I1213 09:11:12.175436  333890 start.go:256] writing updated cluster config ...
	I1213 09:11:12.175704  333890 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:12.179850  333890 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:12.183357  333890 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	W1213 09:11:14.188818  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:16.189566  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:18.689697  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:20.690640  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
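The pod_ready lines above poll the coredns pod's Ready condition and treat a vanished pod as success ("Ready or be gone"). A rough client-go equivalent for one label selector follows; the kubeconfig path and the k8s-app=kube-dns selector are taken from this log, while the 4-minute timeout, 2-second poll interval, and overall framing are simplifications rather than minikube's exact wait logic.

// Sketch of a "Ready or gone" wait for pods matching a label selector.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether a pod's Ready condition is True.
func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22128-5776/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil {
			done := true // zero matching pods counts as "gone", which also ends the wait
			for _, p := range pods.Items {
				if !isReady(p) {
					done = false
					break
				}
			}
			if done {
				fmt.Println("coredns pods are Ready (or gone)")
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for coredns pods")
			return
		case <-time.After(2 * time.Second):
		}
	}
}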
	
	
	==> CRI-O <==
	Dec 13 09:10:45 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:45.904032511Z" level=info msg="Started container" PID=1737 containerID=0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper id=7baaca90-45af-4bd3-8bd6-08961f7f5c65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcf7c2f3e8b106ff04aeb344a3ab5775fd66feb42ed71c70c0ad3a1c402bcb6
	Dec 13 09:10:46 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:46.852452708Z" level=info msg="Removing container: a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4" id=ff601e8a-5da4-4a94-b564-d4c658034ba7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:46 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:46.862139643Z" level=info msg="Removed container a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=ff601e8a-5da4-4a94-b564-d4c658034ba7 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.878375416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=54786de1-98a1-417a-836f-147824bff875 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.879281211Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec231d73-d805-4dac-bf6a-7001c6a1e6fa name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.880295198Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7d6d004c-e5bd-4fd8-81a6-d9b87cffbd68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.880423322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.88562017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.885787248Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1f88cc080f6576720f1efc3403a026d4cda8f18dc870a958e54fa3994c6e9585/merged/etc/passwd: no such file or directory"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.885820257Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1f88cc080f6576720f1efc3403a026d4cda8f18dc870a958e54fa3994c6e9585/merged/etc/group: no such file or directory"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.886123751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.916211118Z" level=info msg="Created container df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80: kube-system/storage-provisioner/storage-provisioner" id=7d6d004c-e5bd-4fd8-81a6-d9b87cffbd68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.916821187Z" level=info msg="Starting container: df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80" id=28b7c4fd-b98b-443f-a0c5-5f6a80a83095 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:10:57 old-k8s-version-234538 crio[570]: time="2025-12-13T09:10:57.918637463Z" level=info msg="Started container" PID=1751 containerID=df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80 description=kube-system/storage-provisioner/storage-provisioner id=28b7c4fd-b98b-443f-a0c5-5f6a80a83095 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eec461f955bd63c9379b362686ddd5ee2d6eb9ff9d34a53d030714fe5093bd7a
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.745062005Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5d8c9942-da22-4f03-9078-1d4a3613decd name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.746175245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4d84393f-b8a2-47c0-a698-c937252cb428 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.747282918Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=e91394f8-52ec-4f39-9436-5b3d5bd3c3b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.747408641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.754983295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.755702822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.787780086Z" level=info msg="Created container a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=e91394f8-52ec-4f39-9436-5b3d5bd3c3b4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.788620289Z" level=info msg="Starting container: a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b" id=29dd111e-1e9f-4417-92a3-fb71831e80b3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.79044667Z" level=info msg="Started container" PID=1767 containerID=a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper id=29dd111e-1e9f-4417-92a3-fb71831e80b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcf7c2f3e8b106ff04aeb344a3ab5775fd66feb42ed71c70c0ad3a1c402bcb6
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.893140212Z" level=info msg="Removing container: 0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b" id=da4469bb-7576-48cb-b974-b1e73a00d340 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:01 old-k8s-version-234538 crio[570]: time="2025-12-13T09:11:01.904216433Z" level=info msg="Removed container 0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd/dashboard-metrics-scraper" id=da4469bb-7576-48cb-b974-b1e73a00d340 name=/runtime.v1.RuntimeService/RemoveContainer
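The CRI-O lines above show dashboard-metrics-scraper being created, started, exiting, and having its previous attempt removed, i.e. a kubelet restart/backoff cycle (the container table below lists it as Exited, attempt 2) rather than a runtime fault. One way to inspect such a loop directly on the node is to list all attempts of the container with crictl; a small Go wrapper is sketched below, where the name filter comes from the log and the rest is illustrative.

// Sketch: list every attempt (including exited ones) of a crash-looping container.
// `crictl ps -a --name <regex>` filters by container name; run this as root on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("crictl", "ps", "-a", "--name", "dashboard-metrics-scraper")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
}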
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a4fcaf3a6c74b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   3dcf7c2f3e8b1       dashboard-metrics-scraper-5f989dc9cf-l8kcd       kubernetes-dashboard
	df4d7bf284592       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   eec461f955bd6       storage-provisioner                              kube-system
	f28fd60687b20       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago       Running             kubernetes-dashboard        0                   9eedd4de5e872       kubernetes-dashboard-8694d4445c-jr9d8            kubernetes-dashboard
	4d79dc4e2f903       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   6526536ea7fe5       busybox                                          default
	08a51937a4354       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   9955cf104081b       coredns-5dd5756b68-g66tb                         kube-system
	736464be00c6b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   eec461f955bd6       storage-provisioner                              kube-system
	c1e919ad40225       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   e7ec864ecbe6e       kindnet-9hllk                                    kube-system
	6d4755f502135       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   64fd5834b8e0b       kube-proxy-6bkvj                                 kube-system
	0cc4f5e85cb5d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   d7a385e97242b       kube-apiserver-old-k8s-version-234538            kube-system
	ccfc11a0ddb83       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   d56041d4a2382       etcd-old-k8s-version-234538                      kube-system
	e2292eb60503a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   ed878c1443255       kube-scheduler-old-k8s-version-234538            kube-system
	b6d10fbd863a8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   f376c1694fa19       kube-controller-manager-old-k8s-version-234538   kube-system
	
	
	==> coredns [08a51937a4354c4bd30265f581946e42c57640c60d744db906265da68f2b4db2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45309 - 35110 "HINFO IN 8291808643669323414.2503676043295594618. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026324045s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-234538
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-234538
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=old-k8s-version-234538
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_09_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:09:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-234538
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:10:57 +0000   Sat, 13 Dec 2025 09:09:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-234538
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                b58ade41-ef0c-4ef7-817f-5090fbbdf23c
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-g66tb                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-234538                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-9hllk                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-234538             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-234538    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-6bkvj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-234538             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l8kcd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jr9d8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-234538 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-234538 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-234538 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [ccfc11a0ddb8317d8e1609f9778d0755dc87dac089178550d5aa53b7a0853424] <==
	{"level":"info","ts":"2025-12-13T09:10:24.598805Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:10:24.601512Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-13T09:10:25.344595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.34465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.34469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-13T09:10:25.344709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.344739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-13T09:10:25.345997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:10:25.346117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-13T09:10:25.347437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-13T09:10:25.347437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-13T09:10:25.345995Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-234538 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-13T09:10:25.350303Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-13T09:10:25.350339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-13T09:10:32.665969Z","caller":"traceutil/trace.go:171","msg":"trace[957178955] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"110.042831ms","start":"2025-12-13T09:10:32.555904Z","end":"2025-12-13T09:10:32.665947Z","steps":["trace[957178955] 'process raft request'  (duration: 109.865345ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:32.793456Z","caller":"traceutil/trace.go:171","msg":"trace[285308251] transaction","detail":"{read_only:false; response_revision:500; number_of_response:1; }","duration":"123.478528ms","start":"2025-12-13T09:10:32.66995Z","end":"2025-12-13T09:10:32.793428Z","steps":["trace[285308251] 'process raft request'  (duration: 112.425856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.018101Z","caller":"traceutil/trace.go:171","msg":"trace[143662783] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"134.069463ms","start":"2025-12-13T09:10:32.884014Z","end":"2025-12-13T09:10:33.018084Z","steps":["trace[143662783] 'process raft request'  (duration: 133.942358ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.234427Z","caller":"traceutil/trace.go:171","msg":"trace[136283724] transaction","detail":"{read_only:false; response_revision:508; number_of_response:1; }","duration":"120.201008ms","start":"2025-12-13T09:10:33.114201Z","end":"2025-12-13T09:10:33.234402Z","steps":["trace[136283724] 'process raft request'  (duration: 100.369069ms)","trace[136283724] 'compare'  (duration: 19.70829ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:33.37955Z","caller":"traceutil/trace.go:171","msg":"trace[1484540722] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"101.766513ms","start":"2025-12-13T09:10:33.277756Z","end":"2025-12-13T09:10:33.379522Z","steps":["trace[1484540722] 'process raft request'  (duration: 69.529081ms)","trace[1484540722] 'compare'  (duration: 32.06527ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-13T09:10:33.525571Z","caller":"traceutil/trace.go:171","msg":"trace[1673762831] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"141.675757ms","start":"2025-12-13T09:10:33.383877Z","end":"2025-12-13T09:10:33.525553Z","steps":["trace[1673762831] 'process raft request'  (duration: 141.359711ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-13T09:10:33.64567Z","caller":"traceutil/trace.go:171","msg":"trace[245186077] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"114.407535ms","start":"2025-12-13T09:10:33.53124Z","end":"2025-12-13T09:10:33.645648Z","steps":["trace[245186077] 'process raft request'  (duration: 100.160208ms)","trace[245186077] 'compare'  (duration: 14.130867ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-13T09:10:33.917335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.719125ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357279072198348 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" mod_revision:512 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" value_size:658 lease:6414985242217422505 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-234538.1880bb5414c68276\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-13T09:10:33.917751Z","caller":"traceutil/trace.go:171","msg":"trace[1239134521] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"205.961441ms","start":"2025-12-13T09:10:33.711759Z","end":"2025-12-13T09:10:33.917721Z","steps":["trace[1239134521] 'process raft request'  (duration: 88.278296ms)","trace[1239134521] 'compare'  (duration: 116.584159ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:11:25 up 53 min,  0 user,  load average: 3.78, 3.49, 2.36
	Linux old-k8s-version-234538 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c1e919ad40225b12a51ff83b8b1ab06eb950ed45acc023817130b2bbb115503a] <==
	I1213 09:10:27.352935       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:10:27.353761       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1213 09:10:27.353984       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:10:27.354016       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:10:27.354038       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:10:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:10:27.557794       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:10:27.653720       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:10:27.653765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:10:27.653992       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:10:27.858363       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:10:27.858391       1 metrics.go:72] Registering metrics
	I1213 09:10:27.858438       1 controller.go:711] "Syncing nftables rules"
	I1213 09:10:37.565593       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:37.565648       1 main.go:301] handling current node
	I1213 09:10:47.558230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:47.558264       1 main.go:301] handling current node
	I1213 09:10:57.566609       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:10:57.566639       1 main.go:301] handling current node
	I1213 09:11:07.557827       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:11:07.557883       1 main.go:301] handling current node
	I1213 09:11:17.565285       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1213 09:11:17.565327       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0cc4f5e85cb5d4e6d07eeb129540177624ba0b7b05e38e98203ef68cb53670db] <==
	I1213 09:10:26.520075       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1213 09:10:26.584372       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:10:26.619842       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1213 09:10:26.619857       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:10:26.619888       1 shared_informer.go:318] Caches are synced for configmaps
	I1213 09:10:26.620101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1213 09:10:26.620123       1 aggregator.go:166] initial CRD sync complete...
	I1213 09:10:26.620128       1 autoregister_controller.go:141] Starting autoregister controller
	I1213 09:10:26.620134       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:10:26.620139       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:10:26.620560       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1213 09:10:26.620775       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1213 09:10:26.620835       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1213 09:10:26.632436       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1213 09:10:27.525057       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:10:27.642827       1 controller.go:624] quota admission added evaluator for: namespaces
	I1213 09:10:27.693810       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1213 09:10:27.720348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:10:27.733383       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:10:27.746792       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1213 09:10:27.793042       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.70.55"}
	I1213 09:10:27.812130       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.178.132"}
	I1213 09:10:38.989985       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1213 09:10:39.004559       1 controller.go:624] quota admission added evaluator for: endpoints
	I1213 09:10:39.019818       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b6d10fbd863a8a81004a0b20dba55d8b74f364e15f804329d14979332876f75a] <==
	I1213 09:10:39.043922       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1213 09:10:39.043986       1 taint_manager.go:211] "Sending events to api server"
	I1213 09:10:39.044204       1 event.go:307] "Event occurred" object="old-k8s-version-234538" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-234538 event: Registered Node old-k8s-version-234538 in Controller"
	I1213 09:10:39.048703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.749316ms"
	I1213 09:10:39.049340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="77.366µs"
	I1213 09:10:39.049531       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.324µs"
	I1213 09:10:39.054524       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.725072ms"
	I1213 09:10:39.054669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.532µs"
	I1213 09:10:39.058613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.93µs"
	I1213 09:10:39.077846       1 shared_informer.go:318] Caches are synced for disruption
	I1213 09:10:39.149383       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1213 09:10:39.167128       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:10:39.232404       1 shared_informer.go:318] Caches are synced for resource quota
	I1213 09:10:39.553619       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:10:39.626203       1 shared_informer.go:318] Caches are synced for garbage collector
	I1213 09:10:39.626252       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1213 09:10:42.864159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.486312ms"
	I1213 09:10:42.864318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.889µs"
	I1213 09:10:45.856533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.783µs"
	I1213 09:10:46.863897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.778µs"
	I1213 09:10:47.865041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="142.87µs"
	I1213 09:11:01.903578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.16µs"
	I1213 09:11:07.191772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.893039ms"
	I1213 09:11:07.191989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.045µs"
	I1213 09:11:09.354955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.613µs"
	
	
	==> kube-proxy [6d4755f502135ef94a11ec27217a6459bc85937c42dc06e9a1e638df610779fb] <==
	I1213 09:10:27.160617       1 server_others.go:69] "Using iptables proxy"
	I1213 09:10:27.171062       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1213 09:10:27.190048       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:10:27.192527       1 server_others.go:152] "Using iptables Proxier"
	I1213 09:10:27.192566       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1213 09:10:27.192574       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1213 09:10:27.192615       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1213 09:10:27.192921       1 server.go:846] "Version info" version="v1.28.0"
	I1213 09:10:27.193035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:27.193876       1 config.go:315] "Starting node config controller"
	I1213 09:10:27.193936       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1213 09:10:27.194827       1 config.go:188] "Starting service config controller"
	I1213 09:10:27.194895       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1213 09:10:27.195031       1 config.go:97] "Starting endpoint slice config controller"
	I1213 09:10:27.195104       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1213 09:10:27.294162       1 shared_informer.go:318] Caches are synced for node config
	I1213 09:10:27.295673       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1213 09:10:27.295681       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e2292eb60503a271d5b03a7e7a8cf528dea0e07edd89ce5c55a81bf4b0c2b310] <==
	I1213 09:10:25.248402       1 serving.go:348] Generated self-signed cert in-memory
	W1213 09:10:26.542354       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:10:26.542393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:10:26.542409       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:10:26.542420       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:10:26.557586       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1213 09:10:26.558627       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:10:26.560698       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:10:26.560774       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1213 09:10:26.565044       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1213 09:10:26.565147       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1213 09:10:26.574256       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 09:10:26.574314       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1213 09:10:28.161100       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141352     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9751c53e-7c8a-44eb-b1b0-bff398385c78-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jr9d8\" (UID: \"9751c53e-7c8a-44eb-b1b0-bff398385c78\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141425     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/630854c3-c982-45a0-9ded-c90136790884-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l8kcd\" (UID: \"630854c3-c982-45a0-9ded-c90136790884\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141583     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r945v\" (UniqueName: \"kubernetes.io/projected/630854c3-c982-45a0-9ded-c90136790884-kube-api-access-r945v\") pod \"dashboard-metrics-scraper-5f989dc9cf-l8kcd\" (UID: \"630854c3-c982-45a0-9ded-c90136790884\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd"
	Dec 13 09:10:39 old-k8s-version-234538 kubelet[721]: I1213 09:10:39.141631     721 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hrb\" (UniqueName: \"kubernetes.io/projected/9751c53e-7c8a-44eb-b1b0-bff398385c78-kube-api-access-t7hrb\") pod \"kubernetes-dashboard-8694d4445c-jr9d8\" (UID: \"9751c53e-7c8a-44eb-b1b0-bff398385c78\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8"
	Dec 13 09:10:42 old-k8s-version-234538 kubelet[721]: I1213 09:10:42.854822     721 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jr9d8" podStartSLOduration=0.860508944 podCreationTimestamp="2025-12-13 09:10:39 +0000 UTC" firstStartedPulling="2025-12-13 09:10:39.361248113 +0000 UTC m=+15.741640298" lastFinishedPulling="2025-12-13 09:10:42.355500508 +0000 UTC m=+18.735892704" observedRunningTime="2025-12-13 09:10:42.854543937 +0000 UTC m=+19.234936137" watchObservedRunningTime="2025-12-13 09:10:42.85476135 +0000 UTC m=+19.235153548"
	Dec 13 09:10:45 old-k8s-version-234538 kubelet[721]: I1213 09:10:45.844093     721 scope.go:117] "RemoveContainer" containerID="a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: I1213 09:10:46.850895     721 scope.go:117] "RemoveContainer" containerID="a7354e50ac7317766710d2552e7522acecde83c588a2ab3a0e2f5c82931624a4"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: I1213 09:10:46.851092     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:46 old-k8s-version-234538 kubelet[721]: E1213 09:10:46.851497     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:47 old-k8s-version-234538 kubelet[721]: I1213 09:10:47.855167     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:47 old-k8s-version-234538 kubelet[721]: E1213 09:10:47.855542     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:49 old-k8s-version-234538 kubelet[721]: I1213 09:10:49.339236     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:10:49 old-k8s-version-234538 kubelet[721]: E1213 09:10:49.339551     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:10:57 old-k8s-version-234538 kubelet[721]: I1213 09:10:57.877834     721 scope.go:117] "RemoveContainer" containerID="736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.744404     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.891636     721 scope.go:117] "RemoveContainer" containerID="0561bd6cf1b8762c87afce171bd0be1a1822139635822c7b29cd7c977c7e8a9b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: I1213 09:11:01.891902     721 scope.go:117] "RemoveContainer" containerID="a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	Dec 13 09:11:01 old-k8s-version-234538 kubelet[721]: E1213 09:11:01.892293     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:11:09 old-k8s-version-234538 kubelet[721]: I1213 09:11:09.339353     721 scope.go:117] "RemoveContainer" containerID="a4fcaf3a6c74bd4a32be169b0f31cc448ee3af614c8b43b9b429453191a59f1b"
	Dec 13 09:11:09 old-k8s-version-234538 kubelet[721]: E1213 09:11:09.339790     721 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l8kcd_kubernetes-dashboard(630854c3-c982-45a0-9ded-c90136790884)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l8kcd" podUID="630854c3-c982-45a0-9ded-c90136790884"
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:21 old-k8s-version-234538 kubelet[721]: I1213 09:11:21.144864     721 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:21 old-k8s-version-234538 systemd[1]: kubelet.service: Consumed 1.659s CPU time.
	
	
	==> kubernetes-dashboard [f28fd60687b20846a254002fb7cb4119ba9d01d31a94e6c8eec0afa665e5faa0] <==
	2025/12/13 09:10:42 Starting overwatch
	2025/12/13 09:10:42 Using namespace: kubernetes-dashboard
	2025/12/13 09:10:42 Using in-cluster config to connect to apiserver
	2025/12/13 09:10:42 Using secret token for csrf signing
	2025/12/13 09:10:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:10:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:10:42 Successful initial request to the apiserver, version: v1.28.0
	2025/12/13 09:10:42 Generating JWE encryption key
	2025/12/13 09:10:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:10:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:10:42 Initializing JWE encryption key from synchronized object
	2025/12/13 09:10:42 Creating in-cluster Sidecar client
	2025/12/13 09:10:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:10:42 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [736464be00c6bd519200cf950d8d2522dd6f74b3dccbd17d0288278dc3d5bd05] <==
	I1213 09:10:27.142691       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:10:57.146921       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [df4d7bf2845923fbd435913f93fe6d7454754cab6c72ec6ee2d93f963f342a80] <==
	I1213 09:10:57.931974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:10:57.940138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:10:57.940197       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 09:11:15.339615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:15.339749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60b54cd0-ddd8-481a-8123-7f67477a3495", APIVersion:"v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8 became leader
	I1213 09:11:15.339798       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8!
	I1213 09:11:15.440502       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-234538_118a40aa-3036-4a78-97b2-c632df866bd8!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234538 -n old-k8s-version-234538
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-234538 -n old-k8s-version-234538: exit status 2 (351.704857ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-234538 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.88646ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-966117
helpers_test.go:244: (dbg) docker inspect newest-cni-966117:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	        "Created": "2025-12-13T09:11:30.834080461Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 342286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:30.870719894Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hostname",
	        "HostsPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hosts",
	        "LogPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680-json.log",
	        "Name": "/newest-cni-966117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-966117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-966117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	                "LowerDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-966117",
	                "Source": "/var/lib/docker/volumes/newest-cni-966117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-966117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-966117",
	                "name.minikube.sigs.k8s.io": "newest-cni-966117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1dd0f6f141ab8e9b721ea6f05578a7c61a41d3c1059b47e92ac05d398f33f185",
	            "SandboxKey": "/var/run/docker/netns/1dd0f6f141ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-966117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b0e850a31badefb3e90f169f6c30ed87a36474bdd831092f642a334450d6990",
	                    "EndpointID": "4c7c7b6b3a6b239baf7c56874c7e91fb26ef1fde44642d47a9a60b9976ef68fa",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:0b:ee:30:14:c8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-966117",
	                        "bebeb5c4da8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-966117 logs -n 25
E1213 09:11:49.200607    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0        │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p kubernetes-upgrade-814560                                                                                                                                                                                                                         │ kubernetes-upgrade-814560    │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                                      │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                           │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
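
The last start row in the table above (no end time recorded) appears to be the run whose "Last Start" log follows. For reference, reproducing that invocation locally would look roughly like the sketch below (binary path and flags taken from the table row; this is a reconstruction, not a command copied from the log verbatim):

    out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 \
      --alsologtostderr --wait=true --apiserver-port=8444 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.2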
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:37.072991  344087 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:37.073236  344087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:37.073245  344087 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:37.073250  344087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:37.073442  344087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:37.073891  344087 out.go:368] Setting JSON to false
	I1213 09:11:37.074975  344087 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3249,"bootTime":1765613848,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:37.075035  344087 start.go:143] virtualization: kvm guest
	I1213 09:11:37.076761  344087 out.go:179] * [default-k8s-diff-port-361270] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:37.078036  344087 notify.go:221] Checking for updates...
	I1213 09:11:37.078110  344087 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:37.079850  344087 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:37.081128  344087 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:37.082331  344087 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:37.083513  344087 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:37.084675  344087 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:37.086194  344087 config.go:182] Loaded profile config "default-k8s-diff-port-361270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:37.086769  344087 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:37.113130  344087 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:37.113223  344087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:37.175716  344087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:37.160781191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:37.175868  344087 docker.go:319] overlay module found
	I1213 09:11:37.181691  344087 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:37.183541  344087 start.go:309] selected driver: docker
	I1213 09:11:37.183575  344087 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:37.183752  344087 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:37.184570  344087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:37.240167  344087 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:37.23088559 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:37.240416  344087 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 09:11:37.240440  344087 cni.go:84] Creating CNI manager for ""
	I1213 09:11:37.240511  344087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:37.240552  344087 start.go:353] cluster config:
	{Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:37.242286  344087 out.go:179] * Starting "default-k8s-diff-port-361270" primary control-plane node in "default-k8s-diff-port-361270" cluster
	I1213 09:11:37.243314  344087 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:37.244534  344087 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:37.245565  344087 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:37.245618  344087 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:37.245632  344087 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:37.245667  344087 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:37.245721  344087 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:37.245745  344087 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1213 09:11:37.245865  344087 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/config.json ...
	I1213 09:11:37.266571  344087 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:37.266590  344087 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:37.266604  344087 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:37.266631  344087 start.go:360] acquireMachinesLock for default-k8s-diff-port-361270: {Name:mk449517ae35c4f56ad4dd7a617f6d17b6cb11de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:37.266687  344087 start.go:364] duration metric: took 38.929µs to acquireMachinesLock for "default-k8s-diff-port-361270"
	I1213 09:11:37.266706  344087 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:37.266710  344087 fix.go:54] fixHost starting: 
	I1213 09:11:37.266898  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:37.284610  344087 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361270: state=Stopped err=<nil>
	W1213 09:11:37.284638  344087 fix.go:138] unexpected machine state, will restart: <nil>
	I1213 09:11:36.700911  341374 out.go:252]   - Booting up control plane ...
	I1213 09:11:36.701012  341374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 09:11:36.701111  341374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 09:11:36.702541  341374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 09:11:36.716666  341374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 09:11:36.716832  341374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1213 09:11:36.724590  341374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1213 09:11:36.724887  341374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 09:11:36.724941  341374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1213 09:11:36.837357  341374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 09:11:36.837552  341374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 09:11:37.338721  341374 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.576003ms
	I1213 09:11:37.341605  341374 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1213 09:11:37.341743  341374 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1213 09:11:37.341866  341374 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1213 09:11:37.341956  341374 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1213 09:11:38.346538  341374 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004690139s
	I1213 09:11:39.695442  341374 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.353554962s
	I1213 09:11:41.344084  341374 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002403981s
	I1213 09:11:41.362542  341374 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 09:11:41.373191  341374 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 09:11:41.382397  341374 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 09:11:41.382675  341374 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-966117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 09:11:41.392004  341374 kubeadm.go:319] [bootstrap-token] Using token: xb5zx2.wpbcyiswxqvt39qv
	I1213 09:11:41.393829  341374 out.go:252]   - Configuring RBAC rules ...
	I1213 09:11:41.393979  341374 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 09:11:41.398063  341374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 09:11:41.403756  341374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 09:11:41.406602  341374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 09:11:41.409081  341374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 09:11:41.412576  341374 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	W1213 09:11:38.688710  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	W1213 09:11:40.689088  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	I1213 09:11:37.286246  344087 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-361270" ...
	I1213 09:11:37.286331  344087 cli_runner.go:164] Run: docker start default-k8s-diff-port-361270
	I1213 09:11:37.524622  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:37.543998  344087 kic.go:430] container "default-k8s-diff-port-361270" state is running.
	I1213 09:11:37.544419  344087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:11:37.563465  344087 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/config.json ...
	I1213 09:11:37.563878  344087 machine.go:94] provisionDockerMachine start ...
	I1213 09:11:37.563946  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:37.582588  344087 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:37.582819  344087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1213 09:11:37.582830  344087 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:11:37.583439  344087 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51890->127.0.0.1:33133: read: connection reset by peer
	I1213 09:11:40.721280  344087 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361270
	
	I1213 09:11:40.721308  344087 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-361270"
	I1213 09:11:40.721374  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:40.742503  344087 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:40.742735  344087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1213 09:11:40.742750  344087 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361270 && echo "default-k8s-diff-port-361270" | sudo tee /etc/hostname
	I1213 09:11:40.896736  344087 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361270
	
	I1213 09:11:40.896831  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:40.919176  344087 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:40.919449  344087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1213 09:11:40.919500  344087 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:11:41.061377  344087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:11:41.061410  344087 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:11:41.061434  344087 ubuntu.go:190] setting up certificates
	I1213 09:11:41.061446  344087 provision.go:84] configureAuth start
	I1213 09:11:41.061547  344087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:11:41.090671  344087 provision.go:143] copyHostCerts
	I1213 09:11:41.090760  344087 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:11:41.090779  344087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:11:41.090878  344087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:11:41.091070  344087 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:11:41.091084  344087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:11:41.091118  344087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:11:41.091181  344087 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:11:41.091188  344087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:11:41.091231  344087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:11:41.091280  344087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361270 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-361270 localhost minikube]
	I1213 09:11:41.203145  344087 provision.go:177] copyRemoteCerts
	I1213 09:11:41.203207  344087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:11:41.203246  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:41.225285  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:41.324095  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:11:41.341758  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1213 09:11:41.360031  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 09:11:41.380124  344087 provision.go:87] duration metric: took 318.664878ms to configureAuth
	I1213 09:11:41.380151  344087 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:11:41.380379  344087 config.go:182] Loaded profile config "default-k8s-diff-port-361270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:41.380501  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:41.403684  344087 main.go:143] libmachine: Using SSH client type: native
	I1213 09:11:41.403966  344087 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1213 09:11:41.403998  344087 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:11:41.746458  344087 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:11:41.746498  344087 machine.go:97] duration metric: took 4.182590979s to provisionDockerMachine
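
The provisioning step just above writes a small sysconfig drop-in that passes the service CIDR to CRI-O as an insecure registry. A minimal way to confirm the file landed, assuming `minikube ssh` access to the same profile (command is a sketch, not taken from this log):

    # inspect the drop-in the provisioner wrote; expected content per the SSH command above
    minikube -p default-k8s-diff-port-361270 ssh -- cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '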
	I1213 09:11:41.746514  344087 start.go:293] postStartSetup for "default-k8s-diff-port-361270" (driver="docker")
	I1213 09:11:41.746525  344087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:11:41.746629  344087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:11:41.746692  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:41.769593  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:41.869097  344087 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:11:41.872979  344087 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:11:41.873005  344087 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:11:41.873016  344087 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:11:41.873058  344087 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:11:41.873126  344087 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:11:41.873221  344087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:11:41.881181  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:41.900167  344087 start.go:296] duration metric: took 153.638025ms for postStartSetup
	I1213 09:11:41.900260  344087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:11:41.900308  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:41.924155  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:42.026823  344087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:11:42.031351  344087 fix.go:56] duration metric: took 4.764634297s for fixHost
	I1213 09:11:42.031378  344087 start.go:83] releasing machines lock for "default-k8s-diff-port-361270", held for 4.764679271s
	I1213 09:11:42.031446  344087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-361270
	I1213 09:11:42.049155  344087 ssh_runner.go:195] Run: cat /version.json
	I1213 09:11:42.049209  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:42.049234  344087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:11:42.049301  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:42.068796  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:42.069597  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:41.751656  341374 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 09:11:42.167925  341374 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1213 09:11:42.751282  341374 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1213 09:11:42.752597  341374 kubeadm.go:319] 
	I1213 09:11:42.752664  341374 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1213 09:11:42.752684  341374 kubeadm.go:319] 
	I1213 09:11:42.752764  341374 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1213 09:11:42.752772  341374 kubeadm.go:319] 
	I1213 09:11:42.752793  341374 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1213 09:11:42.752875  341374 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 09:11:42.752949  341374 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 09:11:42.752958  341374 kubeadm.go:319] 
	I1213 09:11:42.753034  341374 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1213 09:11:42.753044  341374 kubeadm.go:319] 
	I1213 09:11:42.753111  341374 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 09:11:42.753120  341374 kubeadm.go:319] 
	I1213 09:11:42.753204  341374 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1213 09:11:42.753323  341374 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 09:11:42.753430  341374 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 09:11:42.753439  341374 kubeadm.go:319] 
	I1213 09:11:42.753572  341374 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 09:11:42.753640  341374 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1213 09:11:42.753646  341374 kubeadm.go:319] 
	I1213 09:11:42.753710  341374 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xb5zx2.wpbcyiswxqvt39qv \
	I1213 09:11:42.753866  341374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c \
	I1213 09:11:42.753902  341374 kubeadm.go:319] 	--control-plane 
	I1213 09:11:42.753911  341374 kubeadm.go:319] 
	I1213 09:11:42.754025  341374 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1213 09:11:42.754034  341374 kubeadm.go:319] 
	I1213 09:11:42.754142  341374 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xb5zx2.wpbcyiswxqvt39qv \
	I1213 09:11:42.754281  341374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ee58f815f85fc315c500e095f56504e491b6ed949bed649ee5693cfd8113bd8c 
	I1213 09:11:42.756956  341374 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1213 09:11:42.757122  341374 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 09:11:42.757153  341374 cni.go:84] Creating CNI manager for ""
	I1213 09:11:42.757166  341374 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:42.758844  341374 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1213 09:11:42.223579  344087 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:42.230551  344087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:11:42.265663  344087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:11:42.270244  344087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:11:42.270315  344087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:11:42.279635  344087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:11:42.279703  344087 start.go:496] detecting cgroup driver to use...
	I1213 09:11:42.279734  344087 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:11:42.279781  344087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:11:42.294570  344087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:11:42.306320  344087 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:11:42.306376  344087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:11:42.321060  344087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:11:42.333735  344087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:11:42.414383  344087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:11:42.508684  344087 docker.go:234] disabling docker service ...
	I1213 09:11:42.508761  344087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:11:42.526181  344087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:11:42.539017  344087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:11:42.622440  344087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:11:42.704402  344087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:11:42.717188  344087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:11:42.731697  344087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:11:42.731749  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.740988  344087 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:11:42.741053  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.750621  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.761122  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.770316  344087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:11:42.778511  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.787495  344087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.796863  344087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:11:42.807090  344087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:11:42.815791  344087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:11:42.825127  344087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:42.918846  344087 ssh_runner.go:195] Run: sudo systemctl restart crio
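
Taken together, the sed edits above set the pause image, the systemd cgroup manager, the conmon cgroup, and the unprivileged-port sysctl before CRI-O is restarted. A sketch of checking the net effect on the node (expected values are inferred from the sed expressions above, not dumped from the actual file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",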
	I1213 09:11:43.086610  344087 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:11:43.086694  344087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:11:43.091291  344087 start.go:564] Will wait 60s for crictl version
	I1213 09:11:43.091365  344087 ssh_runner.go:195] Run: which crictl
	I1213 09:11:43.096299  344087 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:11:43.126292  344087 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:11:43.126378  344087 ssh_runner.go:195] Run: crio --version
	I1213 09:11:43.157573  344087 ssh_runner.go:195] Run: crio --version
	I1213 09:11:43.190494  344087 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1213 09:11:43.191692  344087 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-361270 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:11:43.211056  344087 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1213 09:11:43.215891  344087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
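
The one-liner above is an idempotent hosts update: it filters out any existing host.minikube.internal line, appends the fresh mapping, writes the result to a temp file, and copies it back over /etc/hosts with sudo (a plain redirect would not cross the privilege boundary). Unrolled, the same pattern reads:

    # same rewrite as the log line above, spelled out; address and name taken from that line
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.103.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts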
	I1213 09:11:43.226246  344087 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:11:43.226384  344087 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1213 09:11:43.226446  344087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:43.258381  344087 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:43.258406  344087 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:11:43.258467  344087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:11:43.284822  344087 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:11:43.284841  344087 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:11:43.284849  344087 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.2 crio true true} ...
	I1213 09:11:43.284940  344087 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-361270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
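
In a systemd drop-in, the empty `ExecStart=` line above clears the unit's previous ExecStart before the minikube-specific one is set, so this block is a full override rather than an append; the scp lines further down place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick way to see the merged result on the node, assuming `minikube ssh` access to this profile:

    # shows the base unit plus the 10-kubeadm.conf override, including the --node-ip flag
    minikube -p default-k8s-diff-port-361270 ssh -- systemctl cat kubelet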
	I1213 09:11:43.285008  344087 ssh_runner.go:195] Run: crio config
	I1213 09:11:43.332309  344087 cni.go:84] Creating CNI manager for ""
	I1213 09:11:43.332339  344087 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:43.332355  344087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1213 09:11:43.332386  344087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361270 NodeName:default-k8s-diff-port-361270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:11:43.332564  344087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361270"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
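
A few lines below, this rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new alongside the staged v1.34.2 binaries. As a sketch, recent kubeadm releases can sanity-check such a file before it is applied (the `config validate` subcommand is assumed to be present in the bundled kubeadm):

    # run inside the node, against the staged binary and rendered config
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new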
	
	I1213 09:11:43.332637  344087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1213 09:11:43.342269  344087 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:11:43.342337  344087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:11:43.351260  344087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1213 09:11:43.365594  344087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 09:11:43.379978  344087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1213 09:11:43.393821  344087 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:11:43.397737  344087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:11:43.407569  344087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:43.493792  344087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:43.522900  344087 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270 for IP: 192.168.103.2
	I1213 09:11:43.522923  344087 certs.go:195] generating shared ca certs ...
	I1213 09:11:43.522941  344087 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:43.523109  344087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:11:43.523169  344087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:11:43.523183  344087 certs.go:257] generating profile certs ...
	I1213 09:11:43.523333  344087 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/client.key
	I1213 09:11:43.523393  344087 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key.371ad0ca
	I1213 09:11:43.523446  344087 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key
	I1213 09:11:43.523603  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:11:43.523648  344087 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:11:43.523661  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:11:43.523702  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:11:43.523740  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:11:43.523770  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:11:43.523824  344087 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:11:43.524570  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:11:43.544328  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:11:43.566044  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:11:43.585917  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:11:43.612099  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1213 09:11:43.633283  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:11:43.652026  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:11:43.672094  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/default-k8s-diff-port-361270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:11:43.690097  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:11:43.707469  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:11:43.726381  344087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:11:43.744577  344087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:11:43.756744  344087 ssh_runner.go:195] Run: openssl version
	I1213 09:11:43.763767  344087 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:11:43.771017  344087 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:11:43.778839  344087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:11:43.782463  344087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:11:43.782527  344087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:11:43.818284  344087 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:11:43.827511  344087 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:11:43.835064  344087 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:11:43.842337  344087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:11:43.846016  344087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:11:43.846063  344087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:11:43.881303  344087 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:11:43.888937  344087 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:43.896184  344087 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:11:43.903933  344087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:43.907502  344087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:43.907549  344087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:11:43.944082  344087 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
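The three `/etc/ssl/certs/<hash>.0` symlink checks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash naming: the value printed by `openssl x509 -hash` becomes the symlink name that system TLS lookups expect. A minimal sketch of the same verification, reusing the CA path from this run:

	# Compute the subject hash and confirm the symlink minikube just created for it exists.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "minikubeCA symlink present"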
	I1213 09:11:43.951855  344087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:11:43.955669  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:11:43.991663  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:11:44.029153  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:11:44.074919  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:11:44.128369  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:11:44.186869  344087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
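Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire inside that window, and a non-zero status is what would prompt minikube to renew the control-plane certs. The same check can be run by hand on the node:

	# Exit 0: apiserver.crt is valid for at least another 24h; exit 1: it expires within 24h.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "still valid"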
	I1213 09:11:44.248985  344087 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-361270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-361270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:44.249101  344087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:11:44.249158  344087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:11:44.280359  344087 cri.go:89] found id: "d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c"
	I1213 09:11:44.280379  344087 cri.go:89] found id: "12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6"
	I1213 09:11:44.280385  344087 cri.go:89] found id: "173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8"
	I1213 09:11:44.280389  344087 cri.go:89] found id: "1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47"
	I1213 09:11:44.280394  344087 cri.go:89] found id: ""
	I1213 09:11:44.280444  344087 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:11:44.293215  344087 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:44.293291  344087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:11:44.301689  344087 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:11:44.301707  344087 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:11:44.301751  344087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:11:44.309164  344087 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:11:44.309929  344087 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-361270" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:44.310430  344087 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-361270" cluster setting kubeconfig missing "default-k8s-diff-port-361270" context setting]
	I1213 09:11:44.311105  344087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:44.313244  344087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:11:44.321748  344087 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1213 09:11:44.321781  344087 kubeadm.go:602] duration metric: took 20.067468ms to restartPrimaryControlPlane
	I1213 09:11:44.321793  344087 kubeadm.go:403] duration metric: took 72.818464ms to StartCluster
	I1213 09:11:44.321810  344087 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:44.321874  344087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:44.323207  344087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:44.323436  344087 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:44.323515  344087 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:44.323629  344087 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-361270"
	I1213 09:11:44.323646  344087 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-361270"
	W1213 09:11:44.323655  344087 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:11:44.323658  344087 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-361270"
	I1213 09:11:44.323678  344087 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-361270"
	I1213 09:11:44.323684  344087 host.go:66] Checking if "default-k8s-diff-port-361270" exists ...
	I1213 09:11:44.323681  344087 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-361270"
	W1213 09:11:44.323693  344087 addons.go:248] addon dashboard should already be in state true
	I1213 09:11:44.323696  344087 config.go:182] Loaded profile config "default-k8s-diff-port-361270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:44.323708  344087 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361270"
	I1213 09:11:44.323726  344087 host.go:66] Checking if "default-k8s-diff-port-361270" exists ...
	I1213 09:11:44.324032  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:44.324228  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:44.324348  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:44.326790  344087 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:44.328067  344087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:44.351718  344087 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-361270"
	W1213 09:11:44.351746  344087 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:11:44.351773  344087 host.go:66] Checking if "default-k8s-diff-port-361270" exists ...
	I1213 09:11:44.352224  344087 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:11:44.353601  344087 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:11:44.354344  344087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:44.355789  344087 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:11:44.355831  344087 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:44.355846  344087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:44.355905  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:44.356926  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:11:44.356945  344087 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:11:44.356997  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:44.383632  344087 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:44.383664  344087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:44.383726  344087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:11:44.390686  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:44.399709  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:44.417344  344087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:11:44.490887  344087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:44.507106  344087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:44.508512  344087 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:44.514757  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:11:44.514778  344087 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:11:44.530818  344087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:44.532309  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:11:44.532329  344087 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:11:44.549733  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:11:44.549756  344087 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:11:44.567669  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:11:44.567689  344087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:11:44.583898  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:11:44.583921  344087 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:11:44.606749  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:11:44.606779  344087 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:11:44.622774  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:11:44.622811  344087 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:11:44.641884  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:11:44.641901  344087 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:11:44.656101  344087 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:44.656134  344087 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:11:44.671851  344087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:11:45.753264  344087 node_ready.go:49] node "default-k8s-diff-port-361270" is "Ready"
	I1213 09:11:45.753309  344087 node_ready.go:38] duration metric: took 1.244756243s for node "default-k8s-diff-port-361270" to be "Ready" ...
	I1213 09:11:45.753326  344087 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:45.753448  344087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:46.321947  344087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.814780416s)
	I1213 09:11:46.321993  344087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.791141104s)
	I1213 09:11:46.322114  344087 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.650214324s)
	I1213 09:11:46.322141  344087 api_server.go:72] duration metric: took 1.998677092s to wait for apiserver process to appear ...
	I1213 09:11:46.322156  344087 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:46.322175  344087 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:46.323989  344087 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-361270 addons enable metrics-server
	
	I1213 09:11:46.326945  344087 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:46.326971  344087 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:11:46.330777  344087 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:11:42.760225  341374 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 09:11:42.764811  341374 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl ...
	I1213 09:11:42.764826  341374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 09:11:42.778382  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 09:11:43.011269  341374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 09:11:43.011473  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:43.011614  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-966117 minikube.k8s.io/updated_at=2025_12_13T09_11_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453 minikube.k8s.io/name=newest-cni-966117 minikube.k8s.io/primary=true
	I1213 09:11:43.024766  341374 ops.go:34] apiserver oom_adj: -16
	I1213 09:11:43.092182  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:43.592524  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:44.092232  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:44.592612  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:45.093326  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:45.594604  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:46.092612  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:46.592340  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
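The half-second `kubectl get sa default` polls above (process 341374, continuing further down at 09:11:47) appear to be the wait behind the `elevateKubeSystemPrivileges` step reported later: the default ServiceAccount has to exist before the `minikube-rbac` cluster-admin binding created at 09:11:43 is of any use. A rough equivalent of that loop, using the same bundled kubectl:

	# Poll until the default ServiceAccount exists (what the repeated "get sa default" calls are doing).
	until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done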
	W1213 09:11:43.188700  333890 pod_ready.go:104] pod "coredns-66bc5c9577-24vtj" is not "Ready", error: <nil>
	I1213 09:11:45.188511  333890 pod_ready.go:94] pod "coredns-66bc5c9577-24vtj" is "Ready"
	I1213 09:11:45.188540  333890 pod_ready.go:86] duration metric: took 33.005161968s for pod "coredns-66bc5c9577-24vtj" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.191278  333890 pod_ready.go:83] waiting for pod "etcd-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.195648  333890 pod_ready.go:94] pod "etcd-embed-certs-379362" is "Ready"
	I1213 09:11:45.195669  333890 pod_ready.go:86] duration metric: took 4.365445ms for pod "etcd-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.197645  333890 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.201717  333890 pod_ready.go:94] pod "kube-apiserver-embed-certs-379362" is "Ready"
	I1213 09:11:45.201738  333890 pod_ready.go:86] duration metric: took 4.071026ms for pod "kube-apiserver-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.203716  333890 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.387059  333890 pod_ready.go:94] pod "kube-controller-manager-embed-certs-379362" is "Ready"
	I1213 09:11:45.387085  333890 pod_ready.go:86] duration metric: took 183.351438ms for pod "kube-controller-manager-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.587168  333890 pod_ready.go:83] waiting for pod "kube-proxy-zmtpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:45.987736  333890 pod_ready.go:94] pod "kube-proxy-zmtpb" is "Ready"
	I1213 09:11:45.987784  333890 pod_ready.go:86] duration metric: took 400.590315ms for pod "kube-proxy-zmtpb" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:46.186953  333890 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:46.586824  333890 pod_ready.go:94] pod "kube-scheduler-embed-certs-379362" is "Ready"
	I1213 09:11:46.586849  333890 pod_ready.go:86] duration metric: took 399.867371ms for pod "kube-scheduler-embed-certs-379362" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:11:46.586861  333890 pod_ready.go:40] duration metric: took 34.406989753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:11:46.642249  333890 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:11:46.643902  333890 out.go:179] * Done! kubectl is now configured to use "embed-certs-379362" cluster and "default" namespace by default
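Process 333890 above waits for each control-plane pod in kube-system to report Ready before declaring the embed-certs-379362 profile done. A hypothetical equivalent check with plain kubectl (minikube uses its own poller, not `kubectl wait`):

	# Hypothetical manual check: block until the apiserver pod reports Ready, within the same 6m budget.
	kubectl --context embed-certs-379362 -n kube-system wait pod \
	  -l component=kube-apiserver --for=condition=Ready --timeout=6m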
	I1213 09:11:46.331969  344087 addons.go:530] duration metric: took 2.008466105s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:11:46.822650  344087 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1213 09:11:46.827200  344087 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:11:46.827232  344087 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
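	The 500s above come from post-start hooks that have not finished yet (first rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, then only rbac/bootstrap-roles), which is expected in the first seconds after an apiserver restart; minikube keeps re-probing /healthz until it answers 200. The endpoint can be probed by hand from the host (-k because the serving cert is signed by minikubeCA):

	# Expect "healthz check failed" with HTTP 500 until the post-start hooks finish, then a plain "ok".
	curl -k https://192.168.103.2:8444/healthz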
	I1213 09:11:47.092660  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:47.592528  341374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 09:11:47.665593  341374 kubeadm.go:1114] duration metric: took 4.654274524s to wait for elevateKubeSystemPrivileges
	I1213 09:11:47.665630  341374 kubeadm.go:403] duration metric: took 12.720544726s to StartCluster
	I1213 09:11:47.665651  341374 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:47.665733  341374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:47.667735  341374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:11:47.668010  341374 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:11:47.668058  341374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 09:11:47.668071  341374 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:11:47.668177  341374 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-966117"
	I1213 09:11:47.668202  341374 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-966117"
	I1213 09:11:47.668202  341374 addons.go:70] Setting default-storageclass=true in profile "newest-cni-966117"
	I1213 09:11:47.668229  341374 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-966117"
	I1213 09:11:47.668235  341374 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:11:47.668235  341374 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:47.668700  341374 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:47.668807  341374 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:47.672625  341374 out.go:179] * Verifying Kubernetes components...
	I1213 09:11:47.674140  341374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:11:47.695737  341374 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 09:11:47.696963  341374 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:47.696993  341374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:11:47.697057  341374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:11:47.698629  341374 addons.go:239] Setting addon default-storageclass=true in "newest-cni-966117"
	I1213 09:11:47.698676  341374 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:11:47.699172  341374 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:47.725607  341374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:11:47.725652  341374 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:47.725669  341374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:11:47.725729  341374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:11:47.755088  341374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:11:47.777930  341374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1213 09:11:47.849333  341374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:11:47.860013  341374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:11:47.890733  341374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:11:48.013352  341374 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
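The sed pipeline at 09:11:47.777930 splices a hosts block into the CoreDNS Corefile ahead of the `forward . /etc/resolv.conf` line, which is how the `host.minikube.internal` record reported here gets answered. Reconstructed from that command (not copied from the node), the injected fragment looks like:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}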
	I1213 09:11:48.015001  341374 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:11:48.015068  341374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:11:48.186583  341374 api_server.go:72] duration metric: took 518.517223ms to wait for apiserver process to appear ...
	I1213 09:11:48.186610  341374 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:11:48.186632  341374 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:11:48.191764  341374 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:11:48.192505  341374 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:11:48.192529  341374 api_server.go:131] duration metric: took 5.911642ms to wait for apiserver health ...
	I1213 09:11:48.192538  341374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:11:48.192836  341374 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1213 09:11:48.194129  341374 addons.go:530] duration metric: took 526.046401ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1213 09:11:48.195424  341374 system_pods.go:59] 8 kube-system pods found
	I1213 09:11:48.195454  341374 system_pods.go:61] "coredns-7d764666f9-sk2nl" [37f2d8b3-7ed6-4e82-9143-7d913b7b5f77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:11:48.195467  341374 system_pods.go:61] "etcd-newest-cni-966117" [d5f60407-9ff1-41b0-8842-112a9d4e4db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:11:48.195476  341374 system_pods.go:61] "kindnet-4ccdw" [e37a84fb-6bb4-46c9-abd8-7faff492b11f] Running
	I1213 09:11:48.195551  341374 system_pods.go:61] "kube-apiserver-newest-cni-966117" [ca4879bf-a328-40f8-bd80-067ce393ba2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:11:48.195558  341374 system_pods.go:61] "kube-controller-manager-newest-cni-966117" [384bdaff-8ec0-437d-b7b2-9186a3d77d5a] Running
	I1213 09:11:48.195565  341374 system_pods.go:61] "kube-proxy-lnm62" [38b74d8a-68b4-4816-bec2-fad7da0471f8] Running
	I1213 09:11:48.195577  341374 system_pods.go:61] "kube-scheduler-newest-cni-966117" [16be3154-0cd9-494f-bdbf-d41819d2c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:11:48.195584  341374 system_pods.go:61] "storage-provisioner" [31d3def0-8e7d-4759-a1b9-0fad99271611] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:11:48.195591  341374 system_pods.go:74] duration metric: took 3.047186ms to wait for pod list to return data ...
	I1213 09:11:48.195601  341374 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:11:48.197715  341374 default_sa.go:45] found service account: "default"
	I1213 09:11:48.197733  341374 default_sa.go:55] duration metric: took 2.126194ms for default service account to be created ...
	I1213 09:11:48.197745  341374 kubeadm.go:587] duration metric: took 529.694552ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:48.197767  341374 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:11:48.199754  341374 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:11:48.199778  341374 node_conditions.go:123] node cpu capacity is 8
	I1213 09:11:48.199800  341374 node_conditions.go:105] duration metric: took 2.028188ms to run NodePressure ...
	I1213 09:11:48.199817  341374 start.go:242] waiting for startup goroutines ...
	I1213 09:11:48.518853  341374 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-966117" context rescaled to 1 replicas
	I1213 09:11:48.518897  341374 start.go:247] waiting for cluster config update ...
	I1213 09:11:48.518929  341374 start.go:256] writing updated cluster config ...
	I1213 09:11:48.519324  341374 ssh_runner.go:195] Run: rm -f paused
	I1213 09:11:48.567790  341374 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:11:48.571515  341374 out.go:179] * Done! kubectl is now configured to use "newest-cni-966117" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.747910962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.753236825Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c01602c0-91b9-4e1c-af9b-ea6580ac3d3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.753690833Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=74f9823e-08d5-43ed-9a92-ad52ce78f37f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.756310598Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.757162052Z" level=info msg="Ran pod sandbox 6d952ee731cdc3babd72f11cfb9801946cf0aa0988fcbbdcfc8e3fcefbe60c4c with infra container: kube-system/kindnet-4ccdw/POD" id=c01602c0-91b9-4e1c-af9b-ea6580ac3d3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.757657975Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.760428494Z" level=info msg="Ran pod sandbox c6d469e121edf93748ec2de1ca1ed23b43e53d3effcceb9e61e7d3823980053d with infra container: kube-system/kube-proxy-lnm62/POD" id=74f9823e-08d5-43ed-9a92-ad52ce78f37f name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.760613555Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d0f9007b-c122-4787-8932-f9f504172c14 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.761580516Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=cdce1d6c-92d4-496f-a538-47051b93aa9d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.763840752Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=7fa4f632-d073-4dc6-af61-735656e6fae1 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.764106383Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=44879ddd-8c8d-459a-92cc-3aac2b3f715e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.770352444Z" level=info msg="Creating container: kube-system/kindnet-4ccdw/kindnet-cni" id=6568614d-2824-4bbe-a0a7-3ad147047731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.770536887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.773038962Z" level=info msg="Creating container: kube-system/kube-proxy-lnm62/kube-proxy" id=8c192d65-eff8-4e95-a8b6-e80f82d72366 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.773611139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.777401781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.778126518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.785338689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.786327272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.888756315Z" level=info msg="Created container fe708960f04b36bd159468fe9e9a52dd72beaeac3c6b5f636d4d3e538d1c2aaf: kube-system/kindnet-4ccdw/kindnet-cni" id=6568614d-2824-4bbe-a0a7-3ad147047731 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.889898464Z" level=info msg="Starting container: fe708960f04b36bd159468fe9e9a52dd72beaeac3c6b5f636d4d3e538d1c2aaf" id=c0bb5a14-395f-4f9f-a040-048182a373e6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.89178765Z" level=info msg="Created container f793aa493916634033b72821a08fd837b384238d3a34d8cd37fc6c55d5eabc3a: kube-system/kube-proxy-lnm62/kube-proxy" id=8c192d65-eff8-4e95-a8b6-e80f82d72366 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.892525015Z" level=info msg="Starting container: f793aa493916634033b72821a08fd837b384238d3a34d8cd37fc6c55d5eabc3a" id=1b64adc9-ecb3-41fe-8f8f-8a75fd3675ad name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.893185725Z" level=info msg="Started container" PID=1583 containerID=fe708960f04b36bd159468fe9e9a52dd72beaeac3c6b5f636d4d3e538d1c2aaf description=kube-system/kindnet-4ccdw/kindnet-cni id=c0bb5a14-395f-4f9f-a040-048182a373e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d952ee731cdc3babd72f11cfb9801946cf0aa0988fcbbdcfc8e3fcefbe60c4c
	Dec 13 09:11:47 newest-cni-966117 crio[782]: time="2025-12-13T09:11:47.896297138Z" level=info msg="Started container" PID=1584 containerID=f793aa493916634033b72821a08fd837b384238d3a34d8cd37fc6c55d5eabc3a description=kube-system/kube-proxy-lnm62/kube-proxy id=1b64adc9-ecb3-41fe-8f8f-8a75fd3675ad name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6d469e121edf93748ec2de1ca1ed23b43e53d3effcceb9e61e7d3823980053d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f793aa4939166       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   2 seconds ago       Running             kube-proxy                0                   c6d469e121edf       kube-proxy-lnm62                            kube-system
	fe708960f04b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   6d952ee731cdc       kindnet-4ccdw                               kube-system
	ce76400d3f3c1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   12 seconds ago      Running             etcd                      0                   24d2b30a21024       etcd-newest-cni-966117                      kube-system
	1e1134726b8f2       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   12 seconds ago      Running             kube-apiserver            0                   c806f53cfdaa6       kube-apiserver-newest-cni-966117            kube-system
	5d6eacaeb8d51       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   12 seconds ago      Running             kube-controller-manager   0                   825c2b49d7127       kube-controller-manager-newest-cni-966117   kube-system
	acaa7fab8a48d       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   12 seconds ago      Running             kube-scheduler            0                   e609d38cbb8dc       kube-scheduler-newest-cni-966117            kube-system
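	The container status table above can be approximated on the node with the same crictl invocation the runner uses earlier in the log, minus --quiet so the columns are printed:

	# List all kube-system containers with their images, names and pod sandboxes.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system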
	
	
	==> describe nodes <==
	Name:               newest-cni-966117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-966117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=newest-cni-966117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_11_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:11:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-966117
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:42 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:42 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:42 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 09:11:42 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-966117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                26d31992-b6d2-4fe0-bab3-2d88f6d863be
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-966117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-4ccdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-966117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-966117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-lnm62                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-966117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-966117 event: Registered Node newest-cni-966117 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [ce76400d3f3c1c3c59fe16f39f57403e2f29dd6fb258dbb4a9fd0b0071581b25] <==
	{"level":"warn","ts":"2025-12-13T09:11:39.048285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.054625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.062543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.070349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.076945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.089639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.096528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.103399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.109966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.120649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.127069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.133580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.140073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.146386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.152806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.159171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.165603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.172593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.179093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.186071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.201077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.207610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.213980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.221131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:39.267234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49792","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:49 up 54 min,  0 user,  load average: 3.30, 3.40, 2.36
	Linux newest-cni-966117 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe708960f04b36bd159468fe9e9a52dd72beaeac3c6b5f636d4d3e538d1c2aaf] <==
	I1213 09:11:48.162902       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:11:48.163175       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:11:48.163349       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:11:48.163368       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:11:48.163404       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:11:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:11:48.364645       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:11:48.364693       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:11:48.364709       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:11:48.364853       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:11:48.696531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:11:48.696561       1 metrics.go:72] Registering metrics
	I1213 09:11:48.696621       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [1e1134726b8f2a27fb2cc3466a6ffa9d9e209cb0683354900aaae9fdbaa68b3c] <==
	I1213 09:11:39.743232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:11:39.743238       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:11:39.743051       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 09:11:39.744227       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:11:39.748901       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:11:39.749070       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1213 09:11:39.755127       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:11:39.944851       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:11:40.647583       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1213 09:11:40.651630       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1213 09:11:40.651646       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:11:41.136982       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:11:41.172780       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:11:41.251460       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1213 09:11:41.258584       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1213 09:11:41.259818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:11:41.263876       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:11:41.664781       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:11:42.158101       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:11:42.167082       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1213 09:11:42.175653       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1213 09:11:47.368953       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:11:47.416679       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1213 09:11:47.568459       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:11:47.573165       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5d6eacaeb8d5178a51676490fa16dbdb4f6d02acef704ec4cba33262dbebf5f8] <==
	I1213 09:11:46.480880       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.480880       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.480947       1 range_allocator.go:177] "Sending events to api server"
	I1213 09:11:46.480991       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:11:46.481012       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:11:46.481017       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481019       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481124       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481094       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481102       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481108       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.481110       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.480993       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.482334       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.482348       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:11:46.482354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:11:46.482472       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.482599       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:11:46.482724       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-966117"
	I1213 09:11:46.482789       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 09:11:46.482890       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.483085       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.488731       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:46.491419       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-966117" podCIDRs=["10.42.0.0/24"]
	I1213 09:11:46.574357       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f793aa493916634033b72821a08fd837b384238d3a34d8cd37fc6c55d5eabc3a] <==
	I1213 09:11:47.947924       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:11:48.023325       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:11:48.124024       1 shared_informer.go:377] "Caches are synced"
	I1213 09:11:48.124068       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:11:48.124151       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:11:48.146086       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:11:48.146148       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:11:48.152750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:11:48.153193       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:11:48.153221       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:48.154749       1 config.go:200] "Starting service config controller"
	I1213 09:11:48.154828       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:11:48.154888       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:11:48.154906       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:11:48.154923       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:11:48.154928       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:11:48.154965       1 config.go:309] "Starting node config controller"
	I1213 09:11:48.154986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:11:48.154993       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:11:48.255062       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:11:48.255065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:11:48.255157       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [acaa7fab8a48dd81fde115aeecdef4e4fe6eca8c5ce7a1700e5b9ba11dbbfb7e] <==
	E1213 09:11:39.698027       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:11:39.698182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1213 09:11:39.698265       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1213 09:11:39.698265       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1213 09:11:40.520992       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:11:40.522154       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1213 09:11:40.531308       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope"
	E1213 09:11:40.532295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1213 09:11:40.624062       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1213 09:11:40.625167       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1213 09:11:40.685453       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:11:40.686577       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1213 09:11:40.703945       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:11:40.704936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1213 09:11:40.717123       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:11:40.718225       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1213 09:11:40.813661       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"
	E1213 09:11:40.814684       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1213 09:11:40.843956       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope"
	E1213 09:11:40.844905       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1213 09:11:40.877118       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope"
	E1213 09:11:40.878115       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1213 09:11:40.924663       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope"
	E1213 09:11:40.925754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	I1213 09:11:43.791003       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:11:43 newest-cni-966117 kubelet[1316]: I1213 09:11:43.051305    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-966117" podStartSLOduration=1.051283881 podStartE2EDuration="1.051283881s" podCreationTimestamp="2025-12-13 09:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:43.051026688 +0000 UTC m=+1.145937771" watchObservedRunningTime="2025-12-13 09:11:43.051283881 +0000 UTC m=+1.146194961"
	Dec 13 09:11:43 newest-cni-966117 kubelet[1316]: I1213 09:11:43.051447    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-966117" podStartSLOduration=1.051439041 podStartE2EDuration="1.051439041s" podCreationTimestamp="2025-12-13 09:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:43.040578614 +0000 UTC m=+1.135489701" watchObservedRunningTime="2025-12-13 09:11:43.051439041 +0000 UTC m=+1.146350125"
	Dec 13 09:11:43 newest-cni-966117 kubelet[1316]: I1213 09:11:43.060457    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-966117" podStartSLOduration=1.0604413 podStartE2EDuration="1.0604413s" podCreationTimestamp="2025-12-13 09:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:43.060408038 +0000 UTC m=+1.155319121" watchObservedRunningTime="2025-12-13 09:11:43.0604413 +0000 UTC m=+1.155352383"
	Dec 13 09:11:43 newest-cni-966117 kubelet[1316]: I1213 09:11:43.070772    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-966117" podStartSLOduration=1.070751898 podStartE2EDuration="1.070751898s" podCreationTimestamp="2025-12-13 09:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:43.070622608 +0000 UTC m=+1.165533690" watchObservedRunningTime="2025-12-13 09:11:43.070751898 +0000 UTC m=+1.165662977"
	Dec 13 09:11:44 newest-cni-966117 kubelet[1316]: E1213 09:11:44.017780    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:11:44 newest-cni-966117 kubelet[1316]: E1213 09:11:44.017901    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:11:44 newest-cni-966117 kubelet[1316]: E1213 09:11:44.017978    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-966117" containerName="kube-controller-manager"
	Dec 13 09:11:44 newest-cni-966117 kubelet[1316]: E1213 09:11:44.018099    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:11:45 newest-cni-966117 kubelet[1316]: E1213 09:11:45.019701    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:11:45 newest-cni-966117 kubelet[1316]: E1213 09:11:45.019813    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:11:46 newest-cni-966117 kubelet[1316]: I1213 09:11:46.531813    1316 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 09:11:46 newest-cni-966117 kubelet[1316]: I1213 09:11:46.532556    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: E1213 09:11:47.370436    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: E1213 09:11:47.505640    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.524972    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-xtables-lock\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525012    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-cni-cfg\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525144    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38b74d8a-68b4-4816-bec2-fad7da0471f8-kube-proxy\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525209    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-lib-modules\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525229    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-xtables-lock\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525268    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmjd\" (UniqueName: \"kubernetes.io/projected/38b74d8a-68b4-4816-bec2-fad7da0471f8-kube-api-access-vzmjd\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525349    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-lib-modules\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:11:47 newest-cni-966117 kubelet[1316]: I1213 09:11:47.525395    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6c9m\" (UniqueName: \"kubernetes.io/projected/e37a84fb-6bb4-46c9-abd8-7faff492b11f-kube-api-access-g6c9m\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:11:48 newest-cni-966117 kubelet[1316]: I1213 09:11:48.051419    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-lnm62" podStartSLOduration=1.051397789 podStartE2EDuration="1.051397789s" podCreationTimestamp="2025-12-13 09:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:48.051077018 +0000 UTC m=+6.145988115" watchObservedRunningTime="2025-12-13 09:11:48.051397789 +0000 UTC m=+6.146308871"
	Dec 13 09:11:48 newest-cni-966117 kubelet[1316]: I1213 09:11:48.051573    1316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-4ccdw" podStartSLOduration=1.051564619 podStartE2EDuration="1.051564619s" podCreationTimestamp="2025-12-13 09:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 09:11:48.0393771 +0000 UTC m=+6.134288183" watchObservedRunningTime="2025-12-13 09:11:48.051564619 +0000 UTC m=+6.146475702"
	Dec 13 09:11:50 newest-cni-966117 kubelet[1316]: E1213 09:11:50.050139    1316 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-966117" containerName="kube-controller-manager"
	

                                                
                                                
-- /stdout --
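The Ready=False condition in the node description above is specific and checkable: at 09:11:42 the kubelet reported "no CNI configuration file in /etc/cni/net.d/", and the kindnet log further down only shows the CNI plugin coming up at 09:11:48, so the node was captured in the short window before the network provider had written its config. A minimal sketch, assuming the profile name from the logs, for confirming this by hand (an illustration, not part of the test run):

	# list the CNI configs exactly where the kubelet looks for them
	out/minikube-linux-amd64 ssh -p newest-cni-966117 -- sudo ls -l /etc/cni/net.d/
	# check whether the Ready condition cleared once the CNI plugin started
	kubectl --context newest-cni-966117 get node newest-cni-966117 -o wide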
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-966117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-sk2nl storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner
E1213 09:11:50.684322    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner: exit status 1 (86.591326ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-sk2nl" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)

                                                
                                    

TestStartStop/group/embed-certs/serial/Pause (5.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-379362 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-379362 --alsologtostderr -v=1: exit status 80 (1.749872169s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-379362 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:11:58.482669  348339 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:58.482942  348339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:58.482952  348339 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:58.482957  348339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:58.483519  348339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:58.484141  348339 out.go:368] Setting JSON to false
	I1213 09:11:58.484184  348339 mustload.go:66] Loading cluster: embed-certs-379362
	I1213 09:11:58.484585  348339 config.go:182] Loaded profile config "embed-certs-379362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:11:58.484942  348339 cli_runner.go:164] Run: docker container inspect embed-certs-379362 --format={{.State.Status}}
	I1213 09:11:58.503250  348339 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:58.503594  348339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:58.563800  348339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-13 09:11:58.55366293 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:58.564371  348339 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-379362 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:11:58.614706  348339 out.go:179] * Pausing node embed-certs-379362 ... 
	I1213 09:11:58.660249  348339 host.go:66] Checking if "embed-certs-379362" exists ...
	I1213 09:11:58.660654  348339 ssh_runner.go:195] Run: systemctl --version
	I1213 09:11:58.660725  348339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-379362
	I1213 09:11:58.683283  348339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/embed-certs-379362/id_rsa Username:docker}
	I1213 09:11:58.786329  348339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:58.802252  348339 pause.go:52] kubelet running: true
	I1213 09:11:58.802318  348339 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:58.960405  348339 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:58.960502  348339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:59.027711  348339 cri.go:89] found id: "e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5"
	I1213 09:11:59.027732  348339 cri.go:89] found id: "10e1c4512db7b3f36aabd31fe44004666790497292399373bc3a1145597ea388"
	I1213 09:11:59.027736  348339 cri.go:89] found id: "5320a87d4688e9c2b6ae07721c8f23195008ba4b560ad59dd275533d689f89c4"
	I1213 09:11:59.027740  348339 cri.go:89] found id: "3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7"
	I1213 09:11:59.027743  348339 cri.go:89] found id: "358dd235ebac9b6552e5f1215bdde832a8435b00b4f0a96249ad8fa28b2c22e1"
	I1213 09:11:59.027747  348339 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:59.027750  348339 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:59.027753  348339 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:59.027756  348339 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:59.027762  348339 cri.go:89] found id: "140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	I1213 09:11:59.027765  348339 cri.go:89] found id: "92eb44fb8d1880f94f9333d3c50f50efe3efbb519df874d628ac29ce85d97478"
	I1213 09:11:59.027768  348339 cri.go:89] found id: ""
	I1213 09:11:59.027806  348339 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:59.039752  348339 retry.go:31] will retry after 142.828049ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:59Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:59.183199  348339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:59.198058  348339 pause.go:52] kubelet running: false
	I1213 09:11:59.198120  348339 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:11:59.343408  348339 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:11:59.343481  348339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:11:59.416074  348339 cri.go:89] found id: "e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5"
	I1213 09:11:59.416095  348339 cri.go:89] found id: "10e1c4512db7b3f36aabd31fe44004666790497292399373bc3a1145597ea388"
	I1213 09:11:59.416103  348339 cri.go:89] found id: "5320a87d4688e9c2b6ae07721c8f23195008ba4b560ad59dd275533d689f89c4"
	I1213 09:11:59.416109  348339 cri.go:89] found id: "3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7"
	I1213 09:11:59.416114  348339 cri.go:89] found id: "358dd235ebac9b6552e5f1215bdde832a8435b00b4f0a96249ad8fa28b2c22e1"
	I1213 09:11:59.416119  348339 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:11:59.416124  348339 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:11:59.416129  348339 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:11:59.416133  348339 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:11:59.416149  348339 cri.go:89] found id: "140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	I1213 09:11:59.416155  348339 cri.go:89] found id: "92eb44fb8d1880f94f9333d3c50f50efe3efbb519df874d628ac29ce85d97478"
	I1213 09:11:59.416159  348339 cri.go:89] found id: ""
	I1213 09:11:59.416202  348339 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:11:59.429303  348339 retry.go:31] will retry after 460.359542ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:11:59Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:11:59.890748  348339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:11:59.904862  348339 pause.go:52] kubelet running: false
	I1213 09:11:59.904922  348339 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:00.070195  348339 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:00.070309  348339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:00.144482  348339 cri.go:89] found id: "e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5"
	I1213 09:12:00.144521  348339 cri.go:89] found id: "10e1c4512db7b3f36aabd31fe44004666790497292399373bc3a1145597ea388"
	I1213 09:12:00.144527  348339 cri.go:89] found id: "5320a87d4688e9c2b6ae07721c8f23195008ba4b560ad59dd275533d689f89c4"
	I1213 09:12:00.144531  348339 cri.go:89] found id: "3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7"
	I1213 09:12:00.144534  348339 cri.go:89] found id: "358dd235ebac9b6552e5f1215bdde832a8435b00b4f0a96249ad8fa28b2c22e1"
	I1213 09:12:00.144539  348339 cri.go:89] found id: "4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab"
	I1213 09:12:00.144543  348339 cri.go:89] found id: "be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74"
	I1213 09:12:00.144547  348339 cri.go:89] found id: "9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a"
	I1213 09:12:00.144552  348339 cri.go:89] found id: "4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da"
	I1213 09:12:00.144560  348339 cri.go:89] found id: "140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	I1213 09:12:00.144565  348339 cri.go:89] found id: "92eb44fb8d1880f94f9333d3c50f50efe3efbb519df874d628ac29ce85d97478"
	I1213 09:12:00.144570  348339 cri.go:89] found id: ""
	I1213 09:12:00.144621  348339 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:00.162624  348339 out.go:203] 
	W1213 09:12:00.163897  348339 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:12:00.163911  348339 out.go:285] * 
	* 
	W1213 09:12:00.168090  348339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:12:00.169767  348339 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-379362 --alsologtostderr -v=1 failed: exit status 80
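The pause failure above reduces to a single command: the crictl listing of kube-system containers succeeds on every attempt, but the follow-up `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory"; the pause path retries twice and then exits with GUEST_PAUSE. A minimal sketch for reproducing that probe by hand over minikube ssh, assuming the profile name from the logs; runc's default state root is /run/runc, and whether this CRI-O build keeps its runtime state somewhere else is an assumption to verify on the node, not something the logs confirm:

	# the container listing that succeeds in the pause path
	out/minikube-linux-amd64 ssh -p embed-certs-379362 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the listing that fails: runc reads its state from /run/runc by default
	out/minikube-linux-amd64 ssh -p embed-certs-379362 -- sudo runc list -f json
	# see which runtime state directories actually exist on the node (candidate paths are a guess, not from the logs)
	out/minikube-linux-amd64 ssh -p embed-certs-379362 -- sudo ls -d /run/runc /run/crio /run/crun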
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-379362
helpers_test.go:244: (dbg) docker inspect embed-certs-379362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	        "Created": "2025-12-13T09:09:57.972253088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334094,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:02.096831754Z",
	            "FinishedAt": "2025-12-13T09:11:01.176746264Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hostname",
	        "HostsPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hosts",
	        "LogPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718-json.log",
	        "Name": "/embed-certs-379362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-379362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-379362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	                "LowerDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/merged",
	                "UpperDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/diff",
	                "WorkDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-379362",
	                "Source": "/var/lib/docker/volumes/embed-certs-379362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-379362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-379362",
	                "name.minikube.sigs.k8s.io": "embed-certs-379362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "08ef349cdb0d4d5e985430b048ccf0d713a38cfdb956d2926a3305b0570a4748",
	            "SandboxKey": "/var/run/docker/netns/08ef349cdb0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-379362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3a87dafd473cb389c587dfde4fe3ed60a013e0268e1a1ec6ca1f8d2969aaec6",
	                    "EndpointID": "40003e0fd2e702633c411edb23aa9754c1d31375e16df99b89cb4a9f85f88fff",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:4f:dd:24:e4:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-379362",
	                        "546452572cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
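(Editor's note: the fields the post-mortem relies on above, the container state and the host-mapped ports, can be pulled from the same inspect data with standard docker CLI Go templates; a sketch using this run's container name, shown for reference only.)

	# container state as seen by docker (Running/Paused flags)
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-379362
	# host port mappings, e.g. 22/tcp -> 127.0.0.1:33123, as JSON
	docker container inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-379362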
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362: exit status 2 (349.485882ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
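(Editor's note: the exit status 2 here reflects the cluster being left in a mixed state after the failed pause; a sketch of querying the other component fields with the same binary, assuming the standard Host/Kubelet/APIServer format fields, where the exit code is typically non-zero unless every component reports Running.)

	out/minikube-linux-amd64 status -p embed-certs-379362 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	echo "status exit: $?"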
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25: (1.06449917s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                                      │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                           │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.068193429Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.073072664Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.073104789Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.457621096Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0d5203bf-bd63-4616-9aa1-16ebddacae51 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.458635329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2353ddb9-f836-460a-9b66-c6c4d3888acc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.459788881Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a9db2f04-6732-4f69-a292-c265c7048bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.459938695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464509649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464890849Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dfee2cfc4345e2b4837752aa0c8fc02cd4b84b02f0eb438444715bd4da0d2d65/merged/etc/passwd: no such file or directory"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464934259Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dfee2cfc4345e2b4837752aa0c8fc02cd4b84b02f0eb438444715bd4da0d2d65/merged/etc/group: no such file or directory"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.465394502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.497372028Z" level=info msg="Created container e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5: kube-system/storage-provisioner/storage-provisioner" id=a9db2f04-6732-4f69-a292-c265c7048bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.498073898Z" level=info msg="Starting container: e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5" id=f6434187-70aa-46c0-a768-4dbf64691c2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.49987934Z" level=info msg="Started container" PID=1764 containerID=e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5 description=kube-system/storage-provisioner/storage-provisioner id=f6434187-70aa-46c0-a768-4dbf64691c2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b697ea9dc3831be081a04c83f8c1e026550fa6a1c7ef219e860cecf0df638ad7
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.334790517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8eb5f864-1612-49a2-978a-4df3c4df9d7c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.335753312Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7245c99a-9468-42ff-83e4-9027d6a2e955 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.336932662Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=7ee92983-1ea8-43af-80ac-5deed8585216 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.337082405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.342752471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.343270325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.369764271Z" level=info msg="Created container 140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=7ee92983-1ea8-43af-80ac-5deed8585216 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.370360262Z" level=info msg="Starting container: 140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb" id=1f039c4a-efa9-4c32-87b7-4d05625242fb name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.372363575Z" level=info msg="Started container" PID=1779 containerID=140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper id=1f039c4a-efa9-4c32-87b7-4d05625242fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=67b2935be51b15084beffa8aef64863d196c3e1ff9bc6fa70b1bc4dcb8d10f64
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.465207734Z" level=info msg="Removing container: 9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f" id=266efc4f-4b3b-4765-8fbd-ba6b56a294ed name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.474731921Z" level=info msg="Removed container 9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=266efc4f-4b3b-4765-8fbd-ba6b56a294ed name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	140f50b697560       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   67b2935be51b1       dashboard-metrics-scraper-6ffb444bf9-fmtv7   kubernetes-dashboard
	e042e8d516018       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   b697ea9dc3831       storage-provisioner                          kube-system
	92eb44fb8d188       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   a5132b765475c       kubernetes-dashboard-855c9754f9-ntzt7        kubernetes-dashboard
	0437cc4ff2ab1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   0f4bb263e7f49       busybox                                      default
	10e1c4512db7b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   60d192bad9736       coredns-66bc5c9577-24vtj                     kube-system
	5320a87d4688e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   f8c6d648f5d1f       kindnet-4vk4d                                kube-system
	3a53fec9eab25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   b697ea9dc3831       storage-provisioner                          kube-system
	358dd235ebac9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   8f240409c8d0f       kube-proxy-zmtpb                             kube-system
	4bc6623c8d51e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   d1c1cea873efa       kube-scheduler-embed-certs-379362            kube-system
	be5f00248e70c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   22cb16dff32aa       kube-apiserver-embed-certs-379362            kube-system
	9f6e183787c3b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   b14ee77e13ece       kube-controller-manager-embed-certs-379362   kube-system
	4aa683e939399       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   c67e2946d2f87       etcd-embed-certs-379362                      kube-system
	
	
	==> coredns [10e1c4512db7b3f36aabd31fe44004666790497292399373bc3a1145597ea388] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40406 - 25896 "HINFO IN 7453030945439729644.185280718633776867. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.083925589s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-379362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-379362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=embed-certs-379362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-379362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-379362
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                93d464fd-d722-496b-b12c-6011440d8ee6
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-24vtj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-379362                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-4vk4d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-379362             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-379362    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-zmtpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-379362             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fmtv7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ntzt7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-379362 event: Registered Node embed-certs-379362 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-379362 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-379362 event: Registered Node embed-certs-379362 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da] <==
	{"level":"warn","ts":"2025-12-13T09:11:09.913543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.923007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.929785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.936453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.944674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.951357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.959153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.967550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.974236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.984587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.991013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.997689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.006390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.013925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.022737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.031071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.040518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.048324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.055634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.063840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.070540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.094718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.101581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.108758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.161253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50958","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:01 up 54 min,  0 user,  load average: 3.10, 3.36, 2.36
	Linux embed-certs-379362 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5320a87d4688e9c2b6ae07721c8f23195008ba4b560ad59dd275533d689f89c4] <==
	I1213 09:11:11.848260       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:11:11.942508       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 09:11:11.942720       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:11:11.942747       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:11:11.942780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:11:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:11:12.142753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:11:12.142787       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:11:12.142803       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:11:12.143069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:11:12.643594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:11:12.643622       1 metrics.go:72] Registering metrics
	I1213 09:11:12.643723       1 controller.go:711] "Syncing nftables rules"
	I1213 09:11:22.054545       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:22.054607       1 main.go:301] handling current node
	I1213 09:11:32.054745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:32.054783       1 main.go:301] handling current node
	I1213 09:11:42.054582       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:42.054638       1 main.go:301] handling current node
	I1213 09:11:52.054596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:52.054656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74] <==
	I1213 09:11:10.676845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:11:10.677290       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:11:10.677373       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 09:11:10.678202       1 aggregator.go:171] initial CRD sync complete...
	I1213 09:11:10.678229       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:11:10.678237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:11:10.678244       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:11:10.678682       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:11:10.678739       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:11:10.679827       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 09:11:10.679887       1 policy_source.go:240] refreshing policies
	I1213 09:11:10.701077       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:11:10.707959       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:11:10.964678       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:11:10.991805       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:11:11.013411       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:11:11.020933       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:11:11.027660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:11:11.061312       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.193.56"}
	I1213 09:11:11.071512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.2.84"}
	I1213 09:11:11.578990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:11:13.709893       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:11:13.761951       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:11:13.859788       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a] <==
	I1213 09:11:13.283885       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 09:11:13.283964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 09:11:13.284035       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-379362"
	I1213 09:11:13.284113       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 09:11:13.286319       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 09:11:13.306251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:11:13.306273       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:11:13.306301       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:11:13.306372       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:11:13.307473       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:11:13.307536       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:11:13.307562       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:11:13.307581       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:11:13.307605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 09:11:13.307587       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:11:13.307607       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:11:13.312205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:11:13.312224       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:11:13.312234       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:11:13.313277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:13.313303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:13.318547       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:11:13.320737       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:11:13.324030       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:11:13.329418       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [358dd235ebac9b6552e5f1215bdde832a8435b00b4f0a96249ad8fa28b2c22e1] <==
	I1213 09:11:11.749661       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:11:11.818249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:11:11.919252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:11:11.919301       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 09:11:11.919426       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:11:11.940507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:11:11.940571       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:11:11.946831       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:11:11.947206       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:11:11.947243       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:11.948912       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:11:11.949328       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:11:11.948998       1 config.go:200] "Starting service config controller"
	I1213 09:11:11.949402       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:11:11.949064       1 config.go:309] "Starting node config controller"
	I1213 09:11:11.949078       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:11:11.949465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:11:11.949475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:11:11.949517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:11:12.049848       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:11:12.049890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:11:12.049903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab] <==
	I1213 09:11:10.137457       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:11:10.609875       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:11:10.609915       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:11:10.609956       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:11:10.609966       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:11:10.658172       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:11:10.658217       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:10.665297       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:11:10.666781       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:11:10.672407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:10.673022       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:10.773956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033792     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js48s\" (UniqueName: \"kubernetes.io/projected/55bff937-5a81-44ea-919b-7ec357f207c3-kube-api-access-js48s\") pod \"kubernetes-dashboard-855c9754f9-ntzt7\" (UID: \"55bff937-5a81-44ea-919b-7ec357f207c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033836     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac42814f-5fda-4349-bc42-6918cd2018ea-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fmtv7\" (UID: \"ac42814f-5fda-4349-bc42-6918cd2018ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033852     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gqb\" (UniqueName: \"kubernetes.io/projected/ac42814f-5fda-4349-bc42-6918cd2018ea-kube-api-access-w6gqb\") pod \"dashboard-metrics-scraper-6ffb444bf9-fmtv7\" (UID: \"ac42814f-5fda-4349-bc42-6918cd2018ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033876     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55bff937-5a81-44ea-919b-7ec357f207c3-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ntzt7\" (UID: \"55bff937-5a81-44ea-919b-7ec357f207c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.802780     739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 09:11:20 embed-certs-379362 kubelet[739]: I1213 09:11:20.395052     739 scope.go:117] "RemoveContainer" containerID="b7d8b8a669784c451ab685e533034850e6951883937084575c53d7ff6d20c975"
	Dec 13 09:11:20 embed-certs-379362 kubelet[739]: I1213 09:11:20.409682     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7" podStartSLOduration=4.64814878 podStartE2EDuration="7.409654901s" podCreationTimestamp="2025-12-13 09:11:13 +0000 UTC" firstStartedPulling="2025-12-13 09:11:14.255151774 +0000 UTC m=+6.007276791" lastFinishedPulling="2025-12-13 09:11:17.016657891 +0000 UTC m=+8.768782912" observedRunningTime="2025-12-13 09:11:17.405853058 +0000 UTC m=+9.157978083" watchObservedRunningTime="2025-12-13 09:11:20.409654901 +0000 UTC m=+12.161779928"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: I1213 09:11:21.400326     739 scope.go:117] "RemoveContainer" containerID="b7d8b8a669784c451ab685e533034850e6951883937084575c53d7ff6d20c975"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: I1213 09:11:21.400653     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: E1213 09:11:21.400851     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:22 embed-certs-379362 kubelet[739]: I1213 09:11:22.405253     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:22 embed-certs-379362 kubelet[739]: E1213 09:11:22.405540     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:30 embed-certs-379362 kubelet[739]: I1213 09:11:30.190366     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:30 embed-certs-379362 kubelet[739]: E1213 09:11:30.190620     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:42 embed-certs-379362 kubelet[739]: I1213 09:11:42.457178     739 scope.go:117] "RemoveContainer" containerID="3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.334233     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.463852     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.464168     739 scope.go:117] "RemoveContainer" containerID="140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: E1213 09:11:43.464343     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:50 embed-certs-379362 kubelet[739]: I1213 09:11:50.190311     739 scope.go:117] "RemoveContainer" containerID="140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	Dec 13 09:11:50 embed-certs-379362 kubelet[739]: E1213 09:11:50.190614     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: kubelet.service: Consumed 1.729s CPU time.
	
	
	==> kubernetes-dashboard [92eb44fb8d1880f94f9333d3c50f50efe3efbb519df874d628ac29ce85d97478] <==
	2025/12/13 09:11:17 Using namespace: kubernetes-dashboard
	2025/12/13 09:11:17 Using in-cluster config to connect to apiserver
	2025/12/13 09:11:17 Using secret token for csrf signing
	2025/12/13 09:11:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:11:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:11:17 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:11:17 Generating JWE encryption key
	2025/12/13 09:11:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:11:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:11:17 Initializing JWE encryption key from synchronized object
	2025/12/13 09:11:17 Creating in-cluster Sidecar client
	2025/12/13 09:11:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:17 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:17 Starting overwatch
	
	
	==> storage-provisioner [3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7] <==
	I1213 09:11:11.712278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:11:41.715879       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5] <==
	I1213 09:11:42.512970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:11:42.520919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:11:42.520969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:11:42.523234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:45.978713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:50.239600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:53.838517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:56.892163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.915883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.921821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:59.922015       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:59.922266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d!
	I1213 09:11:59.922176       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b7f9375-c00d-46e4-bb0f-70ff28c36dd3", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d became leader
	W1213 09:11:59.924764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.929352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:00.023319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-379362 -n embed-certs-379362
E1213 09:12:01.867437    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-379362 -n embed-certs-379362: exit status 2 (321.450321ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-379362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-379362
helpers_test.go:244: (dbg) docker inspect embed-certs-379362:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	        "Created": "2025-12-13T09:09:57.972253088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334094,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:02.096831754Z",
	            "FinishedAt": "2025-12-13T09:11:01.176746264Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hostname",
	        "HostsPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/hosts",
	        "LogPath": "/var/lib/docker/containers/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718/546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718-json.log",
	        "Name": "/embed-certs-379362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-379362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-379362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "546452572cf44951fffa7406ed06b90f27db5c8ee2a90091c16911b9ed6a6718",
	                "LowerDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/merged",
	                "UpperDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/diff",
	                "WorkDir": "/var/lib/docker/overlay2/333a3f34b482c4011994b7785a89d76fb974d8e30de782a7f6d93af42a245744/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-379362",
	                "Source": "/var/lib/docker/volumes/embed-certs-379362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-379362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-379362",
	                "name.minikube.sigs.k8s.io": "embed-certs-379362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "08ef349cdb0d4d5e985430b048ccf0d713a38cfdb956d2926a3305b0570a4748",
	            "SandboxKey": "/var/run/docker/netns/08ef349cdb0d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-379362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f3a87dafd473cb389c587dfde4fe3ed60a013e0268e1a1ec6ca1f8d2969aaec6",
	                    "EndpointID": "40003e0fd2e702633c411edb23aa9754c1d31375e16df99b89cb4a9f85f88fff",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c2:4f:dd:24:e4:c3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-379362",
	                        "546452572cf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362: exit status 2 (321.421889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-379362 logs -n 25: (1.05538551s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-779931                                                                                                                                                                                                                      │ disable-driver-mounts-779931 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:10 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable metrics-server -p embed-certs-379362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │                     │
	│ stop    │ -p embed-certs-379362 --alsologtostderr -v=3                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:10 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                           │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:11:58.872086  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:00.872161  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.068193429Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.073072664Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:11:22 embed-certs-379362 crio[569]: time="2025-12-13T09:11:22.073104789Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.457621096Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0d5203bf-bd63-4616-9aa1-16ebddacae51 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.458635329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2353ddb9-f836-460a-9b66-c6c4d3888acc name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.459788881Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a9db2f04-6732-4f69-a292-c265c7048bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.459938695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464509649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464890849Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/dfee2cfc4345e2b4837752aa0c8fc02cd4b84b02f0eb438444715bd4da0d2d65/merged/etc/passwd: no such file or directory"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.464934259Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/dfee2cfc4345e2b4837752aa0c8fc02cd4b84b02f0eb438444715bd4da0d2d65/merged/etc/group: no such file or directory"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.465394502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.497372028Z" level=info msg="Created container e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5: kube-system/storage-provisioner/storage-provisioner" id=a9db2f04-6732-4f69-a292-c265c7048bb1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.498073898Z" level=info msg="Starting container: e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5" id=f6434187-70aa-46c0-a768-4dbf64691c2a name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:42 embed-certs-379362 crio[569]: time="2025-12-13T09:11:42.49987934Z" level=info msg="Started container" PID=1764 containerID=e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5 description=kube-system/storage-provisioner/storage-provisioner id=f6434187-70aa-46c0-a768-4dbf64691c2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b697ea9dc3831be081a04c83f8c1e026550fa6a1c7ef219e860cecf0df638ad7
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.334790517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8eb5f864-1612-49a2-978a-4df3c4df9d7c name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.335753312Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7245c99a-9468-42ff-83e4-9027d6a2e955 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.336932662Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=7ee92983-1ea8-43af-80ac-5deed8585216 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.337082405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.342752471Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.343270325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.369764271Z" level=info msg="Created container 140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=7ee92983-1ea8-43af-80ac-5deed8585216 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.370360262Z" level=info msg="Starting container: 140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb" id=1f039c4a-efa9-4c32-87b7-4d05625242fb name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.372363575Z" level=info msg="Started container" PID=1779 containerID=140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper id=1f039c4a-efa9-4c32-87b7-4d05625242fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=67b2935be51b15084beffa8aef64863d196c3e1ff9bc6fa70b1bc4dcb8d10f64
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.465207734Z" level=info msg="Removing container: 9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f" id=266efc4f-4b3b-4765-8fbd-ba6b56a294ed name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:11:43 embed-certs-379362 crio[569]: time="2025-12-13T09:11:43.474731921Z" level=info msg="Removed container 9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7/dashboard-metrics-scraper" id=266efc4f-4b3b-4765-8fbd-ba6b56a294ed name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	140f50b697560       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   67b2935be51b1       dashboard-metrics-scraper-6ffb444bf9-fmtv7   kubernetes-dashboard
	e042e8d516018       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   b697ea9dc3831       storage-provisioner                          kube-system
	92eb44fb8d188       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   a5132b765475c       kubernetes-dashboard-855c9754f9-ntzt7        kubernetes-dashboard
	0437cc4ff2ab1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   0f4bb263e7f49       busybox                                      default
	10e1c4512db7b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   60d192bad9736       coredns-66bc5c9577-24vtj                     kube-system
	5320a87d4688e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   f8c6d648f5d1f       kindnet-4vk4d                                kube-system
	3a53fec9eab25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   b697ea9dc3831       storage-provisioner                          kube-system
	358dd235ebac9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   8f240409c8d0f       kube-proxy-zmtpb                             kube-system
	4bc6623c8d51e       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   d1c1cea873efa       kube-scheduler-embed-certs-379362            kube-system
	be5f00248e70c       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   22cb16dff32aa       kube-apiserver-embed-certs-379362            kube-system
	9f6e183787c3b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   b14ee77e13ece       kube-controller-manager-embed-certs-379362   kube-system
	4aa683e939399       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   c67e2946d2f87       etcd-embed-certs-379362                      kube-system
	
	
	==> coredns [10e1c4512db7b3f36aabd31fe44004666790497292399373bc3a1145597ea388] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40406 - 25896 "HINFO IN 7453030945439729644.185280718633776867. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.083925589s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-379362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-379362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=embed-certs-379362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-379362
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:11:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:11:41 +0000   Sat, 13 Dec 2025 09:10:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-379362
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                93d464fd-d722-496b-b12c-6011440d8ee6
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-24vtj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-379362                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-4vk4d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-379362             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-379362    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-zmtpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-379362             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fmtv7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ntzt7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node embed-certs-379362 event: Registered Node embed-certs-379362 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-379362 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-379362 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-379362 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-379362 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-379362 event: Registered Node embed-certs-379362 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [4aa683e93939933e0c046128e063e112508837dfd7e3b3f413f70d5bccf4c6da] <==
	{"level":"warn","ts":"2025-12-13T09:11:09.913543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.923007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.929785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.936453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.944674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.951357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.959153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.967550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.974236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.984587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.991013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:09.997689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.006390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.013925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.022737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.031071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.040518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.048324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.055634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.063840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.070540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.094718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.101581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.108758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:10.161253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50958","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:03 up 54 min,  0 user,  load average: 3.10, 3.36, 2.36
	Linux embed-certs-379362 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5320a87d4688e9c2b6ae07721c8f23195008ba4b560ad59dd275533d689f89c4] <==
	I1213 09:11:11.848260       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:11:11.942508       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1213 09:11:11.942720       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:11:11.942747       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:11:11.942780       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:11:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:11:12.142753       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:11:12.142787       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:11:12.142803       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:11:12.143069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:11:12.643594       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:11:12.643622       1 metrics.go:72] Registering metrics
	I1213 09:11:12.643723       1 controller.go:711] "Syncing nftables rules"
	I1213 09:11:22.054545       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:22.054607       1 main.go:301] handling current node
	I1213 09:11:32.054745       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:32.054783       1 main.go:301] handling current node
	I1213 09:11:42.054582       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:42.054638       1 main.go:301] handling current node
	I1213 09:11:52.054596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:11:52.054656       1 main.go:301] handling current node
	I1213 09:12:02.062711       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1213 09:12:02.062749       1 main.go:301] handling current node
	
	
	==> kube-apiserver [be5f00248e70cd8cdd3aaa3d5a1222e8bf8bbfab76393d6a5892e2e4c34a2a74] <==
	I1213 09:11:10.676845       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1213 09:11:10.677290       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:11:10.677373       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 09:11:10.678202       1 aggregator.go:171] initial CRD sync complete...
	I1213 09:11:10.678229       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:11:10.678237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:11:10.678244       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:11:10.678682       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:11:10.678739       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:11:10.679827       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1213 09:11:10.679887       1 policy_source.go:240] refreshing policies
	I1213 09:11:10.701077       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:11:10.707959       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:11:10.964678       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:11:10.991805       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:11:11.013411       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:11:11.020933       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:11:11.027660       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:11:11.061312       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.193.56"}
	I1213 09:11:11.071512       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.2.84"}
	I1213 09:11:11.578990       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:11:13.709893       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:11:13.761951       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:11:13.859788       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9f6e183787c3b40e4c300978c57f6aef4eb0fabeae2452bf40c81a0b7a5f096a] <==
	I1213 09:11:13.283885       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1213 09:11:13.283964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1213 09:11:13.284035       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-379362"
	I1213 09:11:13.284113       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1213 09:11:13.286319       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1213 09:11:13.306251       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:11:13.306273       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:11:13.306301       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:11:13.306372       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1213 09:11:13.307473       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:11:13.307536       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:11:13.307562       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:11:13.307581       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:11:13.307605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1213 09:11:13.307587       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:11:13.307607       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:11:13.312205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:11:13.312224       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1213 09:11:13.312234       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1213 09:11:13.313277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:13.313303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:13.318547       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:11:13.320737       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:11:13.324030       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1213 09:11:13.329418       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [358dd235ebac9b6552e5f1215bdde832a8435b00b4f0a96249ad8fa28b2c22e1] <==
	I1213 09:11:11.749661       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:11:11.818249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:11:11.919252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:11:11.919301       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1213 09:11:11.919426       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:11:11.940507       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:11:11.940571       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:11:11.946831       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:11:11.947206       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:11:11.947243       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:11.948912       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:11:11.949328       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:11:11.948998       1 config.go:200] "Starting service config controller"
	I1213 09:11:11.949402       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:11:11.949064       1 config.go:309] "Starting node config controller"
	I1213 09:11:11.949078       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:11:11.949465       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:11:11.949475       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:11:11.949517       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:11:12.049848       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:11:12.049890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:11:12.049903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4bc6623c8d51e745a13ec1bbde3156fa4a6306b57cced07bc50b9433f54b52ab] <==
	I1213 09:11:10.137457       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:11:10.609875       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:11:10.609915       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:11:10.609956       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:11:10.609966       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:11:10.658172       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:11:10.658217       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:10.665297       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:11:10.666781       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:11:10.672407       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:10.673022       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:10.773956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033792     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js48s\" (UniqueName: \"kubernetes.io/projected/55bff937-5a81-44ea-919b-7ec357f207c3-kube-api-access-js48s\") pod \"kubernetes-dashboard-855c9754f9-ntzt7\" (UID: \"55bff937-5a81-44ea-919b-7ec357f207c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033836     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac42814f-5fda-4349-bc42-6918cd2018ea-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fmtv7\" (UID: \"ac42814f-5fda-4349-bc42-6918cd2018ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033852     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gqb\" (UniqueName: \"kubernetes.io/projected/ac42814f-5fda-4349-bc42-6918cd2018ea-kube-api-access-w6gqb\") pod \"dashboard-metrics-scraper-6ffb444bf9-fmtv7\" (UID: \"ac42814f-5fda-4349-bc42-6918cd2018ea\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.033876     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55bff937-5a81-44ea-919b-7ec357f207c3-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ntzt7\" (UID: \"55bff937-5a81-44ea-919b-7ec357f207c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7"
	Dec 13 09:11:14 embed-certs-379362 kubelet[739]: I1213 09:11:14.802780     739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 09:11:20 embed-certs-379362 kubelet[739]: I1213 09:11:20.395052     739 scope.go:117] "RemoveContainer" containerID="b7d8b8a669784c451ab685e533034850e6951883937084575c53d7ff6d20c975"
	Dec 13 09:11:20 embed-certs-379362 kubelet[739]: I1213 09:11:20.409682     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ntzt7" podStartSLOduration=4.64814878 podStartE2EDuration="7.409654901s" podCreationTimestamp="2025-12-13 09:11:13 +0000 UTC" firstStartedPulling="2025-12-13 09:11:14.255151774 +0000 UTC m=+6.007276791" lastFinishedPulling="2025-12-13 09:11:17.016657891 +0000 UTC m=+8.768782912" observedRunningTime="2025-12-13 09:11:17.405853058 +0000 UTC m=+9.157978083" watchObservedRunningTime="2025-12-13 09:11:20.409654901 +0000 UTC m=+12.161779928"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: I1213 09:11:21.400326     739 scope.go:117] "RemoveContainer" containerID="b7d8b8a669784c451ab685e533034850e6951883937084575c53d7ff6d20c975"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: I1213 09:11:21.400653     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:21 embed-certs-379362 kubelet[739]: E1213 09:11:21.400851     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:22 embed-certs-379362 kubelet[739]: I1213 09:11:22.405253     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:22 embed-certs-379362 kubelet[739]: E1213 09:11:22.405540     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:30 embed-certs-379362 kubelet[739]: I1213 09:11:30.190366     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:30 embed-certs-379362 kubelet[739]: E1213 09:11:30.190620     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:42 embed-certs-379362 kubelet[739]: I1213 09:11:42.457178     739 scope.go:117] "RemoveContainer" containerID="3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.334233     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.463852     739 scope.go:117] "RemoveContainer" containerID="9332308483f52b6a42623a1b946ff158c622f128cf11344ada2cb07fbcae287f"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: I1213 09:11:43.464168     739 scope.go:117] "RemoveContainer" containerID="140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	Dec 13 09:11:43 embed-certs-379362 kubelet[739]: E1213 09:11:43.464343     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:50 embed-certs-379362 kubelet[739]: I1213 09:11:50.190311     739 scope.go:117] "RemoveContainer" containerID="140f50b697560f6e91f03f377f4c8ebca43cba10481c26850eafe6b7c2c334cb"
	Dec 13 09:11:50 embed-certs-379362 kubelet[739]: E1213 09:11:50.190614     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fmtv7_kubernetes-dashboard(ac42814f-5fda-4349-bc42-6918cd2018ea)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fmtv7" podUID="ac42814f-5fda-4349-bc42-6918cd2018ea"
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:11:58 embed-certs-379362 systemd[1]: kubelet.service: Consumed 1.729s CPU time.
	
	
	==> kubernetes-dashboard [92eb44fb8d1880f94f9333d3c50f50efe3efbb519df874d628ac29ce85d97478] <==
	2025/12/13 09:11:17 Using namespace: kubernetes-dashboard
	2025/12/13 09:11:17 Using in-cluster config to connect to apiserver
	2025/12/13 09:11:17 Using secret token for csrf signing
	2025/12/13 09:11:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:11:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:11:17 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:11:17 Generating JWE encryption key
	2025/12/13 09:11:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:11:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:11:17 Initializing JWE encryption key from synchronized object
	2025/12/13 09:11:17 Creating in-cluster Sidecar client
	2025/12/13 09:11:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:17 Serving insecurely on HTTP port: 9090
	2025/12/13 09:11:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:17 Starting overwatch
	
	
	==> storage-provisioner [3a53fec9eab255c934e57466452f9c7d72c53f5f84c9ead36878ede4e6276ea7] <==
	I1213 09:11:11.712278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:11:41.715879       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e042e8d516018648921a79d594d132e0b67b6c37f2a12375c8cd583846c240c5] <==
	I1213 09:11:42.512970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:11:42.520919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:11:42.520969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:11:42.523234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:45.978713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:50.239600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:53.838517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:56.892163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.915883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.921821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:11:59.922015       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:11:59.922266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d!
	I1213 09:11:59.922176       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b7f9375-c00d-46e4-bb0f-70ff28c36dd3", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d became leader
	W1213 09:11:59.924764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:11:59.929352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:00.023319       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-379362_7d3afb0c-ec8d-4141-8075-f8ad21586a6d!
	W1213 09:12:01.933026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:01.937366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-379362 -n embed-certs-379362
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-379362 -n embed-certs-379362: exit status 2 (343.170399ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-379362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.38s)
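Aside from the pause failure itself, the kubelet log captured above shows dashboard-metrics-scraper-6ffb444bf9-fmtv7 cycling through CrashLoopBackOff, with the restart back-off growing from 10s to 20s. A reproduction sketch only, not part of the test output: the context and pod name are taken from the post-mortem above and are valid only while the embed-certs-379362 profile still exists (the Audit log later shows it being deleted).

  # Show restart counts, back-off state, and recent events for the crash-looping pod
  kubectl --context embed-certs-379362 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-fmtv7
  # Pull the logs of the previous (crashed) container instance
  kubectl --context embed-certs-379362 -n kubernetes-dashboard logs --previous dashboard-metrics-scraper-6ffb444bf9-fmtv7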

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-966117 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-966117 --alsologtostderr -v=1: exit status 80 (1.717704259s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-966117 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:12:10.235579  352619 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:12:10.235832  352619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:10.235840  352619 out.go:374] Setting ErrFile to fd 2...
	I1213 09:12:10.235845  352619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:10.236022  352619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:12:10.236258  352619 out.go:368] Setting JSON to false
	I1213 09:12:10.236276  352619 mustload.go:66] Loading cluster: newest-cni-966117
	I1213 09:12:10.236662  352619 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:10.237029  352619 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:10.255812  352619 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:10.256069  352619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:12:10.312088  352619 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 09:12:10.302196895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:12:10.312700  352619 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-966117 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:12:10.315669  352619 out.go:179] * Pausing node newest-cni-966117 ... 
	I1213 09:12:10.316766  352619 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:10.317036  352619 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:10.317071  352619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:10.335537  352619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:10.429957  352619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:10.442336  352619 pause.go:52] kubelet running: true
	I1213 09:12:10.442399  352619 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:10.572451  352619 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:10.572559  352619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:10.638904  352619 cri.go:89] found id: "41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0"
	I1213 09:12:10.638946  352619 cri.go:89] found id: "f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa"
	I1213 09:12:10.638952  352619 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:10.638956  352619 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:10.638959  352619 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:10.638963  352619 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:10.638967  352619 cri.go:89] found id: ""
	I1213 09:12:10.639009  352619 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:10.650311  352619 retry.go:31] will retry after 248.456585ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:10Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:10.899783  352619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:10.912869  352619 pause.go:52] kubelet running: false
	I1213 09:12:10.912938  352619 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:11.030652  352619 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:11.030721  352619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:11.098025  352619 cri.go:89] found id: "41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0"
	I1213 09:12:11.098046  352619 cri.go:89] found id: "f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa"
	I1213 09:12:11.098050  352619 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:11.098053  352619 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:11.098056  352619 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:11.098059  352619 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:11.098062  352619 cri.go:89] found id: ""
	I1213 09:12:11.098101  352619 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:11.110504  352619 retry.go:31] will retry after 557.13415ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:11Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:11.668083  352619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:11.683791  352619 pause.go:52] kubelet running: false
	I1213 09:12:11.683859  352619 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:11.800826  352619 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:11.800908  352619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:11.871427  352619 cri.go:89] found id: "41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0"
	I1213 09:12:11.871445  352619 cri.go:89] found id: "f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa"
	I1213 09:12:11.871449  352619 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:11.871453  352619 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:11.871456  352619 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:11.871459  352619 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:11.871462  352619 cri.go:89] found id: ""
	I1213 09:12:11.871523  352619 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:11.885534  352619 out.go:203] 
	W1213 09:12:11.887075  352619 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:12:11.887091  352619 out.go:285] * 
	* 
	W1213 09:12:11.891267  352619 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:12:11.893444  352619 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-966117 --alsologtostderr -v=1 failed: exit status 80
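The stderr trace above walks through the pause sequence on the node: check whether kubelet is active, run "sudo systemctl disable --now kubelet", enumerate kube-system/kubernetes-dashboard/istio-operator containers with crictl, then call "sudo runc list -f json", which fails with "open /run/runc: no such file or directory"; after two retries the command exits with GUEST_PAUSE. A minimal reproduction sketch over minikube ssh, assuming the newest-cni-966117 profile is still running; the last command is expected to reproduce the same /run/runc error on this crio node rather than succeed:

  # kubelet should report inactive, since the aborted pause already disabled it
  minikube -p newest-cni-966117 ssh "sudo systemctl is-active kubelet"
  # the same container listing the pause code performs for the kube-system namespace
  minikube -p newest-cni-966117 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
  # the call that fails in the log above: /run/runc is absent on this node
  minikube -p newest-cni-966117 ssh "sudo runc list -f json"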
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-966117
helpers_test.go:244: (dbg) docker inspect newest-cni-966117:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	        "Created": "2025-12-13T09:11:30.834080461Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:59.742176262Z",
	            "FinishedAt": "2025-12-13T09:11:58.843004589Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hostname",
	        "HostsPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hosts",
	        "LogPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680-json.log",
	        "Name": "/newest-cni-966117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-966117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-966117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	                "LowerDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-966117",
	                "Source": "/var/lib/docker/volumes/newest-cni-966117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-966117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-966117",
	                "name.minikube.sigs.k8s.io": "newest-cni-966117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fd932db9a04b78b87a56b9054e740df3804b263389389e471a2d701a446877fa",
	            "SandboxKey": "/var/run/docker/netns/fd932db9a04b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-966117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b0e850a31badefb3e90f169f6c30ed87a36474bdd831092f642a334450d6990",
	                    "EndpointID": "9e99a2a4dcf9e083513ff48c6951f8f93080618b92d0905c18bed927485f35d4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "d2:19:ea:14:cd:7d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-966117",
	                        "bebeb5c4da8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
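The inspect output confirms the mapping the driver depends on: the container's 22/tcp is published on 127.0.0.1:33138, matching the SSH client opened in the pause trace (sshutil.go:53). For reference, the same Go-template query shown in the stderr can be issued by hand; this only restates that invocation, re-quoted for an interactive shell, and assumes the newest-cni-966117 container still exists:

  # Print the host port bound to the container's SSH port (22/tcp)
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-966117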
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117: exit status 2 (328.733369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-966117 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                           │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ newest-cni-966117 image list --format=json                                                                                                                                                                                                           │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p newest-cni-966117 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:11:58.872086  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:00.872161  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:11:59.717723  348846 out.go:252] * Restarting existing docker container for "newest-cni-966117" ...
	I1213 09:11:59.717793  348846 cli_runner.go:164] Run: docker start newest-cni-966117
	I1213 09:11:59.987095  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:00.008413  348846 kic.go:430] container "newest-cni-966117" state is running.
	I1213 09:12:00.008872  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:00.029442  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:12:00.029747  348846 machine.go:94] provisionDockerMachine start ...
	I1213 09:12:00.029825  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:00.049967  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:00.050320  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:00.050338  348846 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:12:00.050937  348846 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42964->127.0.0.1:33138: read: connection reset by peer
	I1213 09:12:03.188177  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.188220  348846 ubuntu.go:182] provisioning hostname "newest-cni-966117"
	I1213 09:12:03.188304  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.208635  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.208982  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.209009  348846 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-966117 && echo "newest-cni-966117" | sudo tee /etc/hostname
	I1213 09:12:03.356451  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.356550  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.377602  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.377902  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.377928  348846 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-966117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-966117/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-966117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:12:03.515384  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:03.515414  348846 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:12:03.515443  348846 ubuntu.go:190] setting up certificates
	I1213 09:12:03.515457  348846 provision.go:84] configureAuth start
	I1213 09:12:03.515533  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:03.535934  348846 provision.go:143] copyHostCerts
	I1213 09:12:03.536012  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:12:03.536028  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:12:03.536096  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:12:03.536187  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:12:03.536195  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:12:03.536232  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:12:03.536293  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:12:03.536301  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:12:03.536324  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:12:03.536386  348846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.newest-cni-966117 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-966117]
	I1213 09:12:03.747763  348846 provision.go:177] copyRemoteCerts
	I1213 09:12:03.747825  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:12:03.747884  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.768773  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:03.867273  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:12:03.886803  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:12:03.905579  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:12:03.923712  348846 provision.go:87] duration metric: took 408.231151ms to configureAuth
	I1213 09:12:03.923746  348846 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:12:03.923916  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:03.924009  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.944125  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.944478  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.944524  348846 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:12:04.251417  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:12:04.251442  348846 machine.go:97] duration metric: took 4.221675747s to provisionDockerMachine
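The CRIO_MINIKUBE_OPTIONS drop-in written just above only takes effect if the node's crio.service actually sources /etc/sysconfig/crio.minikube and appends $CRIO_MINIKUBE_OPTIONS to its ExecStart; that wiring is an assumption about the kicbase image, not something the log shows. A quick way to confirm it on the node (commands assumed, not from the log):

    # Check that the unit consumes the drop-in written above.
    systemctl cat crio | grep -E 'EnvironmentFile|CRIO_MINIKUBE_OPTIONS'
    cat /etc/sysconfig/crio.minikube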
	I1213 09:12:04.251456  348846 start.go:293] postStartSetup for "newest-cni-966117" (driver="docker")
	I1213 09:12:04.251472  348846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:12:04.251566  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:12:04.251603  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.271923  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.377174  348846 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:12:04.380783  348846 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:12:04.380806  348846 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:12:04.380816  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:12:04.380867  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:12:04.380942  348846 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:12:04.381032  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:12:04.388870  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:04.406744  348846 start.go:296] duration metric: took 155.274167ms for postStartSetup
	I1213 09:12:04.406824  348846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:04.406859  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.425060  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.519117  348846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:12:04.523740  348846 fix.go:56] duration metric: took 4.825619979s for fixHost
	I1213 09:12:04.523761  348846 start.go:83] releasing machines lock for "newest-cni-966117", held for 4.825662551s
	I1213 09:12:04.523813  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:04.542972  348846 ssh_runner.go:195] Run: cat /version.json
	I1213 09:12:04.543037  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.543070  348846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:12:04.543152  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.562091  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.562364  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.707160  348846 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:04.714445  348846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:12:04.750084  348846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:12:04.755144  348846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:12:04.755236  348846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:12:04.763878  348846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:12:04.763929  348846 start.go:496] detecting cgroup driver to use...
	I1213 09:12:04.763964  348846 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:12:04.764013  348846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:12:04.778097  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:12:04.790942  348846 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:12:04.790991  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:12:04.805770  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:12:04.818577  348846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:12:04.898219  348846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:12:04.978617  348846 docker.go:234] disabling docker service ...
	I1213 09:12:04.978680  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:12:04.992928  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:12:05.005978  348846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:12:05.088758  348846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:12:05.171196  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:12:05.183599  348846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:12:05.197833  348846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:12:05.197897  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.206562  348846 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:12:05.206647  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.215907  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.224628  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.232991  348846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:12:05.240720  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.249141  348846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.257427  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.265929  348846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:12:05.273133  348846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:12:05.281944  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.385356  348846 ssh_runner.go:195] Run: sudo systemctl restart crio
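Taken together, the sed and sysctl calls logged above (crio.go and the following ssh_runner runs) retarget CRI-O's drop-in for this cluster: pause image, systemd cgroup driver, conmon cgroup, unprivileged low ports, and IP forwarding, followed by a daemon restart. A hand-runnable consolidation of the same commands (drop-in path and values copied from the log; the single-script form is an assumption, not minikube's code):

    # Run inside the node container; path and values as logged above.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'                # CNI needs IP forwarding
    sudo systemctl daemon-reload && sudo systemctl restart crio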
	I1213 09:12:05.520929  348846 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:12:05.521001  348846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:12:05.526028  348846 start.go:564] Will wait 60s for crictl version
	I1213 09:12:05.526097  348846 ssh_runner.go:195] Run: which crictl
	I1213 09:12:05.529805  348846 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:12:05.555375  348846 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:12:05.555477  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.584114  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.615327  348846 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 09:12:05.616457  348846 cli_runner.go:164] Run: docker network inspect newest-cni-966117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
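The Go template passed to docker network inspect above flattens the profile's network into a single JSON-ish line (name, driver, subnet, gateway, MTU, container IPs). A trimmed variant that pulls just the addressing (network name from the log; the printed values are expectations, not captured output):

    docker network inspect newest-cni-966117 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # per the log the gateway is 192.168.94.1 and the node sits at 192.168.94.2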
	I1213 09:12:05.635292  348846 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 09:12:05.639617  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.651081  348846 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:12:05.652314  348846 kubeadm.go:884] updating cluster {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:12:05.652516  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:12:05.652581  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.687546  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.687577  348846 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:12:05.687628  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.715637  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.715657  348846 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:12:05.715664  348846 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 09:12:05.715759  348846 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-966117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:12:05.715822  348846 ssh_runner.go:195] Run: crio config
	I1213 09:12:05.770581  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:12:05.770606  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:12:05.770621  348846 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:12:05.770642  348846 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-966117 NodeName:newest-cni-966117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:12:05.770778  348846 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-966117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:12:05.770848  348846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:12:05.779678  348846 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:12:05.779739  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:12:05.788501  348846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 09:12:05.802841  348846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:12:05.817454  348846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 09:12:05.830458  348846 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:12:05.834197  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.845034  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.926720  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:05.997523  348846 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117 for IP: 192.168.94.2
	I1213 09:12:05.997546  348846 certs.go:195] generating shared ca certs ...
	I1213 09:12:05.997566  348846 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:05.997713  348846 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:12:05.997768  348846 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:12:05.997783  348846 certs.go:257] generating profile certs ...
	I1213 09:12:05.997915  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/client.key
	I1213 09:12:05.998006  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key.4ee2f72f
	I1213 09:12:05.998061  348846 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key
	I1213 09:12:05.998197  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:12:05.998243  348846 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:12:05.998258  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:12:05.998299  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:12:05.998335  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:12:05.998375  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:12:05.998435  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:05.999149  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:12:06.019574  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:12:06.039769  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:12:06.061044  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:12:06.086891  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:12:06.112254  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:12:06.130707  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:12:06.149223  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:12:06.166960  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:12:06.184981  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:12:06.204120  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:12:06.224026  348846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:12:06.237075  348846 ssh_runner.go:195] Run: openssl version
	I1213 09:12:06.244173  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.252708  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:12:06.260879  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265095  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265166  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.301161  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:12:06.309231  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.316876  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:12:06.324648  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328583  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328649  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.363986  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:12:06.373146  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.380694  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:12:06.388858  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392640  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392699  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.427810  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
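Each ln -fs / openssl x509 -hash / test -L triple above installs a certificate into the node's OpenSSL trust directory, where certs are looked up by subject-hash filenames ending in .N. A minimal sketch of that convention (paths reused from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 for this CA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves trust via <hash>.0 links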
	I1213 09:12:06.435920  348846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:12:06.440255  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:12:06.477879  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:12:06.517466  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:12:06.566357  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:12:06.614264  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:12:06.667715  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
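The six -checkend 86400 runs above ask OpenSSL whether each control-plane certificate expires within the next 86400 seconds (24 hours): exit status 0 means the cert is still good, 1 means it is expiring or already expired, which presumably steers the restart path toward regenerating certs. Illustrated on one of the certs from the log:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least another 24h" \
      || echo "expires within 24h (or already expired)"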
	I1213 09:12:06.721234  348846 kubeadm.go:401] StartCluster: {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:12:06.721340  348846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:12:06.721412  348846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:12:06.758871  348846 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:06.758895  348846 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:06.758901  348846 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:06.758906  348846 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:06.758910  348846 cri.go:89] found id: ""
	I1213 09:12:06.758964  348846 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:12:06.773080  348846 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:06Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:06.773166  348846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:12:06.784547  348846 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:12:06.784577  348846 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:12:06.784628  348846 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:12:06.795795  348846 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:12:06.796533  348846 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-966117" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.796828  348846 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-966117" cluster setting kubeconfig missing "newest-cni-966117" context setting]
	I1213 09:12:06.797449  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.800252  348846 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:12:06.810290  348846 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 09:12:06.810327  348846 kubeadm.go:602] duration metric: took 25.742497ms to restartPrimaryControlPlane
	I1213 09:12:06.810339  348846 kubeadm.go:403] duration metric: took 89.114693ms to StartCluster
	I1213 09:12:06.810357  348846 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.810417  348846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.811517  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.811783  348846 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:12:06.811972  348846 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:12:06.812098  348846 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-966117"
	I1213 09:12:06.812122  348846 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-966117"
	I1213 09:12:06.812123  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:06.812116  348846 addons.go:70] Setting dashboard=true in profile "newest-cni-966117"
	I1213 09:12:06.812148  348846 addons.go:239] Setting addon dashboard=true in "newest-cni-966117"
	I1213 09:12:06.812140  348846 addons.go:70] Setting default-storageclass=true in profile "newest-cni-966117"
	W1213 09:12:06.812157  348846 addons.go:248] addon dashboard should already be in state true
	I1213 09:12:06.812169  348846 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-966117"
	I1213 09:12:06.812199  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	W1213 09:12:06.812131  348846 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:12:06.812253  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.812524  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812689  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812745  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.816903  348846 out.go:179] * Verifying Kubernetes components...
	I1213 09:12:06.819064  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:06.840702  348846 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:12:06.842378  348846 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:12:06.842618  348846 addons.go:239] Setting addon default-storageclass=true in "newest-cni-966117"
	W1213 09:12:06.842633  348846 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:12:06.842661  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.843100  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.845434  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:12:06.845458  348846 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:12:06.845538  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.848056  348846 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 09:12:02.872197  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:05.371940  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:06.852360  348846 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:06.852383  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:12:06.852438  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.883177  348846 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:06.883209  348846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:12:06.883277  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.884226  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.897411  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.909108  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.978460  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:06.993222  348846 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:12:06.993297  348846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:06.997850  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:12:06.997871  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:12:07.007126  348846 api_server.go:72] duration metric: took 195.312265ms to wait for apiserver process to appear ...
	I1213 09:12:07.007154  348846 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:12:07.007175  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:07.011914  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:07.014555  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:12:07.014577  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:12:07.019771  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:07.029222  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:12:07.029247  348846 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:12:07.046837  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:12:07.046861  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:12:07.062365  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:12:07.062392  348846 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:12:07.078224  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:12:07.078262  348846 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:12:07.092750  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:12:07.092770  348846 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:12:07.110234  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:12:07.110262  348846 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:12:07.124880  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.124902  348846 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:12:07.140572  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.991252  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:07.991294  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:07.991314  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.004904  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.004937  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.008203  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.021272  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.021307  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.507813  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.512905  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:08.512932  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:08.545622  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.525818473s)
	I1213 09:12:08.545637  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.533695672s)
	I1213 09:12:08.545757  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.40514658s)
	I1213 09:12:08.547413  348846 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-966117 addons enable metrics-server
	
	I1213 09:12:08.557103  348846 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:12:08.558451  348846 addons.go:530] duration metric: took 1.746499358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:12:09.007392  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.011715  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:09.011744  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:09.507347  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.511631  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:12:09.513046  348846 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:12:09.513079  348846 api_server.go:131] duration metric: took 2.505917364s to wait for apiserver health ...
	I1213 09:12:09.513097  348846 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:12:09.516862  348846 system_pods.go:59] 8 kube-system pods found
	I1213 09:12:09.516915  348846 system_pods.go:61] "coredns-7d764666f9-sk2nl" [37f2d8b3-7ed6-4e82-9143-7d913b7b5f77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516936  348846 system_pods.go:61] "etcd-newest-cni-966117" [d5f60407-9ff1-41b0-8842-112a9d4e4db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:12:09.516944  348846 system_pods.go:61] "kindnet-4ccdw" [e37a84fb-6bb4-46c9-abd8-7faff492b11f] Running
	I1213 09:12:09.516951  348846 system_pods.go:61] "kube-apiserver-newest-cni-966117" [ca4879bf-a328-40f8-bd80-067ce393ba2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:12:09.516956  348846 system_pods.go:61] "kube-controller-manager-newest-cni-966117" [384bdaff-8ec0-437d-b7b2-9186a3d77d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:12:09.516961  348846 system_pods.go:61] "kube-proxy-lnm62" [38b74d8a-68b4-4816-bec2-fad7da0471f8] Running
	I1213 09:12:09.516966  348846 system_pods.go:61] "kube-scheduler-newest-cni-966117" [16be3154-0cd9-494f-bdbf-d41819d2c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:12:09.516975  348846 system_pods.go:61] "storage-provisioner" [31d3def0-8e7d-4759-a1b9-0fad99271611] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516980  348846 system_pods.go:74] duration metric: took 3.876843ms to wait for pod list to return data ...
	I1213 09:12:09.516989  348846 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:12:09.519603  348846 default_sa.go:45] found service account: "default"
	I1213 09:12:09.519627  348846 default_sa.go:55] duration metric: took 2.631674ms for default service account to be created ...
	I1213 09:12:09.519643  348846 kubeadm.go:587] duration metric: took 2.707831782s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:12:09.519662  348846 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:12:09.522032  348846 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:12:09.522053  348846 node_conditions.go:123] node cpu capacity is 8
	I1213 09:12:09.522067  348846 node_conditions.go:105] duration metric: took 2.401048ms to run NodePressure ...
	I1213 09:12:09.522078  348846 start.go:242] waiting for startup goroutines ...
	I1213 09:12:09.522084  348846 start.go:247] waiting for cluster config update ...
	I1213 09:12:09.522094  348846 start.go:256] writing updated cluster config ...
	I1213 09:12:09.522385  348846 ssh_runner.go:195] Run: rm -f paused
	I1213 09:12:09.569110  348846 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:12:09.570864  348846 out.go:179] * Done! kubectl is now configured to use "newest-cni-966117" cluster and "default" namespace by default
	W1213 09:12:07.870810  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:09.873311  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.392879221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.396404964Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=da3d5237-d889-4559-bd19-0a526c660be4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.397623961Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b13cdc02-6b24-4bb6-8834-0a1ce4087dee name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.399647611Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.400346099Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.401094631Z" level=info msg="Ran pod sandbox dbc7a9a1d245917290a2350c9c23ff918a1e86e980af673dc8b6dde60701dc75 with infra container: kube-system/kindnet-4ccdw/POD" id=b13cdc02-6b24-4bb6-8834-0a1ce4087dee name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.4012391Z" level=info msg="Ran pod sandbox 4a3628a805539237a83b62ae33178a1104342d21ed03b07204e12dc9f9ff063d with infra container: kube-system/kube-proxy-lnm62/POD" id=da3d5237-d889-4559-bd19-0a526c660be4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.403216315Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4a76780d-3394-453d-9342-8ffd55954800 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.40337097Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5c90e684-640b-4a60-a2a1-15ce7617cbd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.404321612Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0050f7fd-e4e2-4ee9-aace-77c83782d7d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.404608967Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=01581eab-6f2a-4997-a70b-d53a7f059823 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405407432Z" level=info msg="Creating container: kube-system/kindnet-4ccdw/kindnet-cni" id=215ba5c9-2778-4f7e-aa5a-3aa309b1fad4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405526337Z" level=info msg="Creating container: kube-system/kube-proxy-lnm62/kube-proxy" id=74143166-e56b-4b83-a40a-532c2d0ab6a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405635212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405533678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.410153302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.41069086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.410769775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.411217908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.440174607Z" level=info msg="Created container 41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0: kube-system/kindnet-4ccdw/kindnet-cni" id=215ba5c9-2778-4f7e-aa5a-3aa309b1fad4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.440871905Z" level=info msg="Starting container: 41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0" id=fd514a15-1bd0-451e-a6fd-954bf21af68c name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.443076602Z" level=info msg="Started container" PID=1052 containerID=41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0 description=kube-system/kindnet-4ccdw/kindnet-cni id=fd514a15-1bd0-451e-a6fd-954bf21af68c name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbc7a9a1d245917290a2350c9c23ff918a1e86e980af673dc8b6dde60701dc75
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.444971181Z" level=info msg="Created container f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa: kube-system/kube-proxy-lnm62/kube-proxy" id=74143166-e56b-4b83-a40a-532c2d0ab6a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.445624924Z" level=info msg="Starting container: f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa" id=900bbe82-6efb-439f-bfeb-b2de88c7ab35 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.44843872Z" level=info msg="Started container" PID=1053 containerID=f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa description=kube-system/kube-proxy-lnm62/kube-proxy id=900bbe82-6efb-439f-bfeb-b2de88c7ab35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a3628a805539237a83b62ae33178a1104342d21ed03b07204e12dc9f9ff063d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	41a7a56c9eb61       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   dbc7a9a1d2459       kindnet-4ccdw                               kube-system
	f04db8598d999       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   4 seconds ago       Running             kube-proxy                1                   4a3628a805539       kube-proxy-lnm62                            kube-system
	5b1856a3a0712       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   6 seconds ago       Running             kube-scheduler            1                   23d6ff2b19b45       kube-scheduler-newest-cni-966117            kube-system
	fac698cd1af50       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   6 seconds ago       Running             kube-apiserver            1                   9eaa28c3fcb95       kube-apiserver-newest-cni-966117            kube-system
	8807f33081db2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   6 seconds ago       Running             etcd                      1                   7029ba7fec568       etcd-newest-cni-966117                      kube-system
	0345d6de3446b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   6 seconds ago       Running             kube-controller-manager   1                   17db9ca69bf6e       kube-controller-manager-newest-cni-966117   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-966117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-966117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=newest-cni-966117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_11_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:11:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-966117
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-966117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                26d31992-b6d2-4fe0-bab3-2d88f6d863be
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-966117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-4ccdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-966117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-966117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-lnm62                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-966117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-966117 event: Registered Node newest-cni-966117 in Controller
	  Normal  RegisteredNode  1s    node-controller  Node newest-cni-966117 event: Registered Node newest-cni-966117 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09] <==
	{"level":"warn","ts":"2025-12-13T09:12:07.381867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.387777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.394059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.405066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.412739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.419252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.425440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.431770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.438101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.450599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.464668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.470803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.477182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.483693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.489894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.497935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.504119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.510248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.516555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.522649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.530341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.543815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.556085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.562354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.616172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:12 up 54 min,  0 user,  load average: 2.62, 3.25, 2.33
	Linux newest-cni-966117 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0] <==
	I1213 09:12:08.660013       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:12:08.660292       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:12:08.660425       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:12:08.660440       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:12:08.660463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:12:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:12:08.859773       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:12:08.860684       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:12:08.860702       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:12:08.860868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:12:09.261310       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:12:09.261342       1 metrics.go:72] Registering metrics
	I1213 09:12:09.261418       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff] <==
	I1213 09:12:08.073587       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:12:08.073594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:12:08.073601       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:12:08.073685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:12:08.073447       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.074262       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:12:08.074327       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 09:12:08.080703       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:12:08.096152       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:12:08.100759       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 09:12:08.107955       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.107980       1 policy_source.go:248] refreshing policies
	I1213 09:12:08.116600       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:12:08.123841       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:12:08.335521       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:12:08.361785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:12:08.398374       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:12:08.407105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:12:08.458341       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.42.77"}
	I1213 09:12:08.471084       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.103.231"}
	I1213 09:12:08.978177       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:12:11.674671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:12:11.726234       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:12:11.825366       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:12:11.927299       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19] <==
	I1213 09:12:11.226775       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226760       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227359       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227441       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227538       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:12:11.226514       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:12:11.227628       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-966117"
	I1213 09:12:11.227677       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 09:12:11.227694       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227724       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226774       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226785       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226802       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227634       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:11.228137       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.228162       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226793       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226765       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.228630       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.232242       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.233838       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:11.326710       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.326729       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:12:11.326733       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:12:11.334082       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa] <==
	I1213 09:12:08.488364       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:12:08.554756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:08.654960       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.654998       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:12:08.655085       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:12:08.674142       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:12:08.674210       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:12:08.679418       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:12:08.679888       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:12:08.679931       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:12:08.682354       1 config.go:200] "Starting service config controller"
	I1213 09:12:08.682379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:12:08.682455       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:12:08.682477       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:12:08.682461       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:12:08.682508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:12:08.682564       1 config.go:309] "Starting node config controller"
	I1213 09:12:08.682584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:12:08.682593       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:12:08.782592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:12:08.782614       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:12:08.782633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e] <==
	I1213 09:12:06.849221       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:12:07.992370       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:12:07.992537       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:12:07.992593       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:12:07.992608       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:12:08.032216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 09:12:08.032315       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:12:08.035141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:12:08.035184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:08.035322       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:12:08.036161       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:12:08.136124       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.113849     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.113929     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.114051     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120433     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-cni-cfg\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120480     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-lib-modules\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120544     674 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120627     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120661     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120801     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-xtables-lock\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120846     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-lib-modules\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120898     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-xtables-lock\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.121675     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.124818     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-966117\" already exists" pod="kube-system/kube-scheduler-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.124896     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126064     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-966117\" already exists" pod="kube-system/etcd-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126135     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126516     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-966117\" already exists" pod="kube-system/kube-apiserver-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126590     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.119999     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.120086     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.120247     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.978128     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-966117" containerName="kube-controller-manager"
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117: exit status 2 (325.542902ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-966117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v: exit status 1 (65.923801ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-sk2nl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-srzqp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-pfj5v" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v: exit status 1
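Note: the four NotFound errors above most likely reflect only the missing namespace flag, since kubectl describe pod searches the current (default) namespace while the listed pods live in kube-system and kubernetes-dashboard. A hedged, hypothetical re-check against the same context would qualify the namespace explicitly:

	# illustrative only; pod names are taken from the non-running list above
	kubectl --context newest-cni-966117 -n kube-system describe pod storage-provisioner
	kubectl --context newest-cni-966117 -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-pfj5v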
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-966117
helpers_test.go:244: (dbg) docker inspect newest-cni-966117:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	        "Created": "2025-12-13T09:11:30.834080461Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 349050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:59.742176262Z",
	            "FinishedAt": "2025-12-13T09:11:58.843004589Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hostname",
	        "HostsPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/hosts",
	        "LogPath": "/var/lib/docker/containers/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680/bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680-json.log",
	        "Name": "/newest-cni-966117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-966117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-966117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bebeb5c4da8e61745fd610abdce0ca6712013521e87d833b96aef6cd50c5b680",
	                "LowerDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fc71a6257ef0b4ec8a2db8a60ba6034bd2a1e0c36a1f8de9a430a2234a41dd0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-966117",
	                "Source": "/var/lib/docker/volumes/newest-cni-966117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-966117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-966117",
	                "name.minikube.sigs.k8s.io": "newest-cni-966117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fd932db9a04b78b87a56b9054e740df3804b263389389e471a2d701a446877fa",
	            "SandboxKey": "/var/run/docker/netns/fd932db9a04b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-966117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b0e850a31badefb3e90f169f6c30ed87a36474bdd831092f642a334450d6990",
	                    "EndpointID": "9e99a2a4dcf9e083513ff48c6951f8f93080618b92d0905c18bed927485f35d4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "d2:19:ea:14:cd:7d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-966117",
	                        "bebeb5c4da8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
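Note: when only a few fields of this inspect output are needed, a Go-template filter avoids scraping the full JSON. The sketch below is illustrative and simply reuses the container name above; the 22/tcp template is the same one the minikube provisioner runs later in these logs to discover the mapped SSH port.

	# illustrative only
	docker inspect newest-cni-966117 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker inspect newest-cni-966117 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'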
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117: exit status 2 (313.407475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
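Note: both formatted status calls above print "Running" for the single field they select yet exit non-zero, which indicates that not every component is healthy (consistent with the kubelet being stopped in the journal excerpt earlier in this report). A hedged follow-up against the same profile is to request the full multi-field status instead of a single template field, which makes the degraded component visible:

	# illustrative only
	out/minikube-linux-amd64 status -p newest-cni-966117
	out/minikube-linux-amd64 status -p newest-cni-966117 --output json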
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-966117 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                                               │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ no-preload-291522 image list --format=json                                                                                                                                                                                                           │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p no-preload-291522 --alsologtostderr -v=1                                                                                                                                                                                                          │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ newest-cni-966117 image list --format=json                                                                                                                                                                                                           │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p newest-cni-966117 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:11:58.872086  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:00.872161  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:11:59.717723  348846 out.go:252] * Restarting existing docker container for "newest-cni-966117" ...
	I1213 09:11:59.717793  348846 cli_runner.go:164] Run: docker start newest-cni-966117
	I1213 09:11:59.987095  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:00.008413  348846 kic.go:430] container "newest-cni-966117" state is running.
	I1213 09:12:00.008872  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:00.029442  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:12:00.029747  348846 machine.go:94] provisionDockerMachine start ...
	I1213 09:12:00.029825  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:00.049967  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:00.050320  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:00.050338  348846 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:12:00.050937  348846 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42964->127.0.0.1:33138: read: connection reset by peer
	I1213 09:12:03.188177  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.188220  348846 ubuntu.go:182] provisioning hostname "newest-cni-966117"
	I1213 09:12:03.188304  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.208635  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.208982  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.209009  348846 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-966117 && echo "newest-cni-966117" | sudo tee /etc/hostname
	I1213 09:12:03.356451  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.356550  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.377602  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.377902  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.377928  348846 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-966117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-966117/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-966117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:12:03.515384  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:03.515414  348846 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:12:03.515443  348846 ubuntu.go:190] setting up certificates
	I1213 09:12:03.515457  348846 provision.go:84] configureAuth start
	I1213 09:12:03.515533  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:03.535934  348846 provision.go:143] copyHostCerts
	I1213 09:12:03.536012  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:12:03.536028  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:12:03.536096  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:12:03.536187  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:12:03.536195  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:12:03.536232  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:12:03.536293  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:12:03.536301  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:12:03.536324  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:12:03.536386  348846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.newest-cni-966117 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-966117]
	I1213 09:12:03.747763  348846 provision.go:177] copyRemoteCerts
	I1213 09:12:03.747825  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:12:03.747884  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.768773  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:03.867273  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:12:03.886803  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:12:03.905579  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:12:03.923712  348846 provision.go:87] duration metric: took 408.231151ms to configureAuth
	I1213 09:12:03.923746  348846 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:12:03.923916  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:03.924009  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.944125  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.944478  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.944524  348846 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:12:04.251417  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:12:04.251442  348846 machine.go:97] duration metric: took 4.221675747s to provisionDockerMachine
	I1213 09:12:04.251456  348846 start.go:293] postStartSetup for "newest-cni-966117" (driver="docker")
	I1213 09:12:04.251472  348846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:12:04.251566  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:12:04.251603  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.271923  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.377174  348846 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:12:04.380783  348846 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:12:04.380806  348846 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:12:04.380816  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:12:04.380867  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:12:04.380942  348846 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:12:04.381032  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:12:04.388870  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:04.406744  348846 start.go:296] duration metric: took 155.274167ms for postStartSetup
	I1213 09:12:04.406824  348846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:04.406859  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.425060  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.519117  348846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:12:04.523740  348846 fix.go:56] duration metric: took 4.825619979s for fixHost
	I1213 09:12:04.523761  348846 start.go:83] releasing machines lock for "newest-cni-966117", held for 4.825662551s
	I1213 09:12:04.523813  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:04.542972  348846 ssh_runner.go:195] Run: cat /version.json
	I1213 09:12:04.543037  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.543070  348846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:12:04.543152  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.562091  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.562364  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.707160  348846 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:04.714445  348846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:12:04.750084  348846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:12:04.755144  348846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:12:04.755236  348846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:12:04.763878  348846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:12:04.763929  348846 start.go:496] detecting cgroup driver to use...
	I1213 09:12:04.763964  348846 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:12:04.764013  348846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:12:04.778097  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:12:04.790942  348846 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:12:04.790991  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:12:04.805770  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:12:04.818577  348846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:12:04.898219  348846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:12:04.978617  348846 docker.go:234] disabling docker service ...
	I1213 09:12:04.978680  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:12:04.992928  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:12:05.005978  348846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:12:05.088758  348846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:12:05.171196  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:12:05.183599  348846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:12:05.197833  348846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:12:05.197897  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.206562  348846 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:12:05.206647  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.215907  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.224628  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.232991  348846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:12:05.240720  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.249141  348846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.257427  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.265929  348846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:12:05.273133  348846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:12:05.281944  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.385356  348846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:12:05.520929  348846 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:12:05.521001  348846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:12:05.526028  348846 start.go:564] Will wait 60s for crictl version
	I1213 09:12:05.526097  348846 ssh_runner.go:195] Run: which crictl
	I1213 09:12:05.529805  348846 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:12:05.555375  348846 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:12:05.555477  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.584114  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.615327  348846 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 09:12:05.616457  348846 cli_runner.go:164] Run: docker network inspect newest-cni-966117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:12:05.635292  348846 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 09:12:05.639617  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.651081  348846 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:12:05.652314  348846 kubeadm.go:884] updating cluster {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:12:05.652516  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:12:05.652581  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.687546  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.687577  348846 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:12:05.687628  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.715637  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.715657  348846 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:12:05.715664  348846 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 09:12:05.715759  348846 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-966117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:12:05.715822  348846 ssh_runner.go:195] Run: crio config
	I1213 09:12:05.770581  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:12:05.770606  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:12:05.770621  348846 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:12:05.770642  348846 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-966117 NodeName:newest-cni-966117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:12:05.770778  348846 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-966117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:12:05.770848  348846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:12:05.779678  348846 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:12:05.779739  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:12:05.788501  348846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 09:12:05.802841  348846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:12:05.817454  348846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 09:12:05.830458  348846 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:12:05.834197  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.845034  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.926720  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:05.997523  348846 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117 for IP: 192.168.94.2
	I1213 09:12:05.997546  348846 certs.go:195] generating shared ca certs ...
	I1213 09:12:05.997566  348846 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:05.997713  348846 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:12:05.997768  348846 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:12:05.997783  348846 certs.go:257] generating profile certs ...
	I1213 09:12:05.997915  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/client.key
	I1213 09:12:05.998006  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key.4ee2f72f
	I1213 09:12:05.998061  348846 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key
	I1213 09:12:05.998197  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:12:05.998243  348846 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:12:05.998258  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:12:05.998299  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:12:05.998335  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:12:05.998375  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:12:05.998435  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:05.999149  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:12:06.019574  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:12:06.039769  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:12:06.061044  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:12:06.086891  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:12:06.112254  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:12:06.130707  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:12:06.149223  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:12:06.166960  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:12:06.184981  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:12:06.204120  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:12:06.224026  348846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:12:06.237075  348846 ssh_runner.go:195] Run: openssl version
	I1213 09:12:06.244173  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.252708  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:12:06.260879  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265095  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265166  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.301161  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:12:06.309231  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.316876  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:12:06.324648  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328583  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328649  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.363986  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:12:06.373146  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.380694  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:12:06.388858  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392640  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392699  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.427810  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:12:06.435920  348846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:12:06.440255  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:12:06.477879  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:12:06.517466  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:12:06.566357  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:12:06.614264  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:12:06.667715  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:12:06.721234  348846 kubeadm.go:401] StartCluster: {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:12:06.721340  348846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:12:06.721412  348846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:12:06.758871  348846 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:06.758895  348846 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:06.758901  348846 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:06.758906  348846 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:06.758910  348846 cri.go:89] found id: ""
	I1213 09:12:06.758964  348846 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:12:06.773080  348846 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:06Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:06.773166  348846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:12:06.784547  348846 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:12:06.784577  348846 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:12:06.784628  348846 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:12:06.795795  348846 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:12:06.796533  348846 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-966117" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.796828  348846 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-966117" cluster setting kubeconfig missing "newest-cni-966117" context setting]
	I1213 09:12:06.797449  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.800252  348846 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:12:06.810290  348846 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 09:12:06.810327  348846 kubeadm.go:602] duration metric: took 25.742497ms to restartPrimaryControlPlane
	I1213 09:12:06.810339  348846 kubeadm.go:403] duration metric: took 89.114693ms to StartCluster
	I1213 09:12:06.810357  348846 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.810417  348846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.811517  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.811783  348846 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:12:06.811972  348846 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:12:06.812098  348846 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-966117"
	I1213 09:12:06.812122  348846 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-966117"
	I1213 09:12:06.812123  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:06.812116  348846 addons.go:70] Setting dashboard=true in profile "newest-cni-966117"
	I1213 09:12:06.812148  348846 addons.go:239] Setting addon dashboard=true in "newest-cni-966117"
	I1213 09:12:06.812140  348846 addons.go:70] Setting default-storageclass=true in profile "newest-cni-966117"
	W1213 09:12:06.812157  348846 addons.go:248] addon dashboard should already be in state true
	I1213 09:12:06.812169  348846 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-966117"
	I1213 09:12:06.812199  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	W1213 09:12:06.812131  348846 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:12:06.812253  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.812524  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812689  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812745  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.816903  348846 out.go:179] * Verifying Kubernetes components...
	I1213 09:12:06.819064  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:06.840702  348846 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:12:06.842378  348846 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:12:06.842618  348846 addons.go:239] Setting addon default-storageclass=true in "newest-cni-966117"
	W1213 09:12:06.842633  348846 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:12:06.842661  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.843100  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.845434  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:12:06.845458  348846 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:12:06.845538  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.848056  348846 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 09:12:02.872197  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:05.371940  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:06.852360  348846 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:06.852383  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:12:06.852438  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.883177  348846 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:06.883209  348846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:12:06.883277  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.884226  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.897411  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.909108  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.978460  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:06.993222  348846 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:12:06.993297  348846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:06.997850  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:12:06.997871  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:12:07.007126  348846 api_server.go:72] duration metric: took 195.312265ms to wait for apiserver process to appear ...
	I1213 09:12:07.007154  348846 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:12:07.007175  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:07.011914  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:07.014555  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:12:07.014577  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:12:07.019771  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:07.029222  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:12:07.029247  348846 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:12:07.046837  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:12:07.046861  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:12:07.062365  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:12:07.062392  348846 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:12:07.078224  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:12:07.078262  348846 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:12:07.092750  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:12:07.092770  348846 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:12:07.110234  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:12:07.110262  348846 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:12:07.124880  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.124902  348846 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:12:07.140572  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.991252  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:07.991294  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:07.991314  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.004904  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.004937  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.008203  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.021272  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.021307  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.507813  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.512905  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:08.512932  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:08.545622  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.525818473s)
	I1213 09:12:08.545637  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.533695672s)
	I1213 09:12:08.545757  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.40514658s)
	I1213 09:12:08.547413  348846 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-966117 addons enable metrics-server
	
	I1213 09:12:08.557103  348846 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:12:08.558451  348846 addons.go:530] duration metric: took 1.746499358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:12:09.007392  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.011715  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:09.011744  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:09.507347  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.511631  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:12:09.513046  348846 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:12:09.513079  348846 api_server.go:131] duration metric: took 2.505917364s to wait for apiserver health ...
	I1213 09:12:09.513097  348846 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:12:09.516862  348846 system_pods.go:59] 8 kube-system pods found
	I1213 09:12:09.516915  348846 system_pods.go:61] "coredns-7d764666f9-sk2nl" [37f2d8b3-7ed6-4e82-9143-7d913b7b5f77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516936  348846 system_pods.go:61] "etcd-newest-cni-966117" [d5f60407-9ff1-41b0-8842-112a9d4e4db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:12:09.516944  348846 system_pods.go:61] "kindnet-4ccdw" [e37a84fb-6bb4-46c9-abd8-7faff492b11f] Running
	I1213 09:12:09.516951  348846 system_pods.go:61] "kube-apiserver-newest-cni-966117" [ca4879bf-a328-40f8-bd80-067ce393ba2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:12:09.516956  348846 system_pods.go:61] "kube-controller-manager-newest-cni-966117" [384bdaff-8ec0-437d-b7b2-9186a3d77d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:12:09.516961  348846 system_pods.go:61] "kube-proxy-lnm62" [38b74d8a-68b4-4816-bec2-fad7da0471f8] Running
	I1213 09:12:09.516966  348846 system_pods.go:61] "kube-scheduler-newest-cni-966117" [16be3154-0cd9-494f-bdbf-d41819d2c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:12:09.516975  348846 system_pods.go:61] "storage-provisioner" [31d3def0-8e7d-4759-a1b9-0fad99271611] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516980  348846 system_pods.go:74] duration metric: took 3.876843ms to wait for pod list to return data ...
	I1213 09:12:09.516989  348846 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:12:09.519603  348846 default_sa.go:45] found service account: "default"
	I1213 09:12:09.519627  348846 default_sa.go:55] duration metric: took 2.631674ms for default service account to be created ...
	I1213 09:12:09.519643  348846 kubeadm.go:587] duration metric: took 2.707831782s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:12:09.519662  348846 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:12:09.522032  348846 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:12:09.522053  348846 node_conditions.go:123] node cpu capacity is 8
	I1213 09:12:09.522067  348846 node_conditions.go:105] duration metric: took 2.401048ms to run NodePressure ...
	I1213 09:12:09.522078  348846 start.go:242] waiting for startup goroutines ...
	I1213 09:12:09.522084  348846 start.go:247] waiting for cluster config update ...
	I1213 09:12:09.522094  348846 start.go:256] writing updated cluster config ...
	I1213 09:12:09.522385  348846 ssh_runner.go:195] Run: rm -f paused
	I1213 09:12:09.569110  348846 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:12:09.570864  348846 out.go:179] * Done! kubectl is now configured to use "newest-cni-966117" cluster and "default" namespace by default
	W1213 09:12:07.870810  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:09.873311  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.392879221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.396404964Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=da3d5237-d889-4559-bd19-0a526c660be4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.397623961Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b13cdc02-6b24-4bb6-8834-0a1ce4087dee name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.399647611Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.400346099Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.401094631Z" level=info msg="Ran pod sandbox dbc7a9a1d245917290a2350c9c23ff918a1e86e980af673dc8b6dde60701dc75 with infra container: kube-system/kindnet-4ccdw/POD" id=b13cdc02-6b24-4bb6-8834-0a1ce4087dee name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.4012391Z" level=info msg="Ran pod sandbox 4a3628a805539237a83b62ae33178a1104342d21ed03b07204e12dc9f9ff063d with infra container: kube-system/kube-proxy-lnm62/POD" id=da3d5237-d889-4559-bd19-0a526c660be4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.403216315Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4a76780d-3394-453d-9342-8ffd55954800 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.40337097Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=5c90e684-640b-4a60-a2a1-15ce7617cbd2 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.404321612Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0050f7fd-e4e2-4ee9-aace-77c83782d7d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.404608967Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=01581eab-6f2a-4997-a70b-d53a7f059823 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405407432Z" level=info msg="Creating container: kube-system/kindnet-4ccdw/kindnet-cni" id=215ba5c9-2778-4f7e-aa5a-3aa309b1fad4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405526337Z" level=info msg="Creating container: kube-system/kube-proxy-lnm62/kube-proxy" id=74143166-e56b-4b83-a40a-532c2d0ab6a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405635212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.405533678Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.410153302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.41069086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.410769775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.411217908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.440174607Z" level=info msg="Created container 41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0: kube-system/kindnet-4ccdw/kindnet-cni" id=215ba5c9-2778-4f7e-aa5a-3aa309b1fad4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.440871905Z" level=info msg="Starting container: 41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0" id=fd514a15-1bd0-451e-a6fd-954bf21af68c name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.443076602Z" level=info msg="Started container" PID=1052 containerID=41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0 description=kube-system/kindnet-4ccdw/kindnet-cni id=fd514a15-1bd0-451e-a6fd-954bf21af68c name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbc7a9a1d245917290a2350c9c23ff918a1e86e980af673dc8b6dde60701dc75
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.444971181Z" level=info msg="Created container f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa: kube-system/kube-proxy-lnm62/kube-proxy" id=74143166-e56b-4b83-a40a-532c2d0ab6a7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.445624924Z" level=info msg="Starting container: f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa" id=900bbe82-6efb-439f-bfeb-b2de88c7ab35 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:08 newest-cni-966117 crio[522]: time="2025-12-13T09:12:08.44843872Z" level=info msg="Started container" PID=1053 containerID=f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa description=kube-system/kube-proxy-lnm62/kube-proxy id=900bbe82-6efb-439f-bfeb-b2de88c7ab35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a3628a805539237a83b62ae33178a1104342d21ed03b07204e12dc9f9ff063d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	41a7a56c9eb61       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   dbc7a9a1d2459       kindnet-4ccdw                               kube-system
	f04db8598d999       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   6 seconds ago       Running             kube-proxy                1                   4a3628a805539       kube-proxy-lnm62                            kube-system
	5b1856a3a0712       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   7 seconds ago       Running             kube-scheduler            1                   23d6ff2b19b45       kube-scheduler-newest-cni-966117            kube-system
	fac698cd1af50       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   8 seconds ago       Running             kube-apiserver            1                   9eaa28c3fcb95       kube-apiserver-newest-cni-966117            kube-system
	8807f33081db2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   8 seconds ago       Running             etcd                      1                   7029ba7fec568       etcd-newest-cni-966117                      kube-system
	0345d6de3446b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   8 seconds ago       Running             kube-controller-manager   1                   17db9ca69bf6e       kube-controller-manager-newest-cni-966117   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-966117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-966117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=newest-cni-966117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_11_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:11:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-966117
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:12:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 13 Dec 2025 09:12:08 +0000   Sat, 13 Dec 2025 09:11:37 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-966117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                26d31992-b6d2-4fe0-bab3-2d88f6d863be
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-966117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-4ccdw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-966117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-966117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-lnm62                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-966117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-966117 event: Registered Node newest-cni-966117 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-966117 event: Registered Node newest-cni-966117 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09] <==
	{"level":"warn","ts":"2025-12-13T09:12:07.381867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.387777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.394059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.405066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.412739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.419252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.425440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.431770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.438101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.450599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.464668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.470803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.477182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.483693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.489894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.497935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.504119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.510248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.516555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.522649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.530341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.543815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.556085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.562354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:12:07.616172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:14 up 54 min,  0 user,  load average: 2.57, 3.22, 2.33
	Linux newest-cni-966117 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41a7a56c9eb619da41037920c85dbbafa98381fb7ccff7e3a621b31d4c46d1d0] <==
	I1213 09:12:08.660013       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:12:08.660292       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1213 09:12:08.660425       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:12:08.660440       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:12:08.660463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:12:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:12:08.859773       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:12:08.860684       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:12:08.860702       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:12:08.860868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:12:09.261310       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:12:09.261342       1 metrics.go:72] Registering metrics
	I1213 09:12:09.261418       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff] <==
	I1213 09:12:08.073587       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:12:08.073594       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:12:08.073601       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:12:08.073685       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:12:08.073447       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.074262       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1213 09:12:08.074327       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1213 09:12:08.080703       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:12:08.096152       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:12:08.100759       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1213 09:12:08.107955       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.107980       1 policy_source.go:248] refreshing policies
	I1213 09:12:08.116600       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:12:08.123841       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:12:08.335521       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:12:08.361785       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:12:08.398374       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:12:08.407105       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:12:08.458341       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.42.77"}
	I1213 09:12:08.471084       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.103.231"}
	I1213 09:12:08.978177       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1213 09:12:11.674671       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1213 09:12:11.726234       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:12:11.825366       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:12:11.927299       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19] <==
	I1213 09:12:11.226775       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226760       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227359       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227441       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227538       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1213 09:12:11.226514       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1213 09:12:11.227628       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-966117"
	I1213 09:12:11.227677       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1213 09:12:11.227694       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227724       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226774       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226785       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226802       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.227634       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:11.228137       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.228162       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226793       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.226765       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.228630       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.232242       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.233838       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:11.326710       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:11.326729       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1213 09:12:11.326733       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1213 09:12:11.334082       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f04db8598d999ee50ac06594270b168936ff0e0a39202c2c67cfc236cc6a39fa] <==
	I1213 09:12:08.488364       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:12:08.554756       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:08.654960       1 shared_informer.go:377] "Caches are synced"
	I1213 09:12:08.654998       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1213 09:12:08.655085       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:12:08.674142       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:12:08.674210       1 server_linux.go:136] "Using iptables Proxier"
	I1213 09:12:08.679418       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:12:08.679888       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1213 09:12:08.679931       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:12:08.682354       1 config.go:200] "Starting service config controller"
	I1213 09:12:08.682379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:12:08.682455       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:12:08.682477       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:12:08.682461       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:12:08.682508       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:12:08.682564       1 config.go:309] "Starting node config controller"
	I1213 09:12:08.682584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:12:08.682593       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:12:08.782592       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:12:08.782614       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1213 09:12:08.782633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e] <==
	I1213 09:12:06.849221       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:12:07.992370       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:12:07.992537       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:12:07.992593       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:12:07.992608       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:12:08.032216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1213 09:12:08.032315       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:12:08.035141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:12:08.035184       1 shared_informer.go:370] "Waiting for caches to sync"
	I1213 09:12:08.035322       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:12:08.036161       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:12:08.136124       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.113849     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.113929     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.114051     674 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120433     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-cni-cfg\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120480     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-lib-modules\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120544     674 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120627     674 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120661     674 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120801     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e37a84fb-6bb4-46c9-abd8-7faff492b11f-xtables-lock\") pod \"kindnet-4ccdw\" (UID: \"e37a84fb-6bb4-46c9-abd8-7faff492b11f\") " pod="kube-system/kindnet-4ccdw"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120846     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-lib-modules\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.120898     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38b74d8a-68b4-4816-bec2-fad7da0471f8-xtables-lock\") pod \"kube-proxy-lnm62\" (UID: \"38b74d8a-68b4-4816-bec2-fad7da0471f8\") " pod="kube-system/kube-proxy-lnm62"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: I1213 09:12:08.121675     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.124818     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-966117\" already exists" pod="kube-system/kube-scheduler-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.124896     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126064     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-966117\" already exists" pod="kube-system/etcd-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126135     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126516     674 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-966117\" already exists" pod="kube-system/kube-apiserver-newest-cni-966117"
	Dec 13 09:12:08 newest-cni-966117 kubelet[674]: E1213 09:12:08.126590     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.119999     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-966117" containerName="etcd"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.120086     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-966117" containerName="kube-scheduler"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.120247     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-966117" containerName="kube-apiserver"
	Dec 13 09:12:09 newest-cni-966117 kubelet[674]: E1213 09:12:09.978128     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-966117" containerName="kube-controller-manager"
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:12:10 newest-cni-966117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117: exit status 2 (320.179367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-966117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v: exit status 1 (61.010059ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-sk2nl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-srzqp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-pfj5v" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-966117 describe pod coredns-7d764666f9-sk2nl storage-provisioner dashboard-metrics-scraper-867fb5f87b-srzqp kubernetes-dashboard-b84665fb8-pfj5v: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.20s)
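The post-mortem checks captured above can be repeated by hand against the same profile; a minimal sketch of that manual triage, assuming the newest-cni-966117 profile and its kubeconfig context still exist (the describe step takes whichever pod names the previous command reports as non-running):

	# API server status as reported by minikube for the profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-966117 -n newest-cni-966117
	# list pods in any namespace that are not in the Running phase
	kubectl --context newest-cni-966117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the non-running pods returned above (names are placeholders here)
	kubectl --context newest-cni-966117 describe pod <non-running-pod-names>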

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-361270 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-361270 --alsologtostderr -v=1: exit status 80 (2.195764177s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-361270 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:12:33.504006  354794 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:12:33.504118  354794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:33.504126  354794 out.go:374] Setting ErrFile to fd 2...
	I1213 09:12:33.504130  354794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:12:33.504349  354794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:12:33.504593  354794 out.go:368] Setting JSON to false
	I1213 09:12:33.504613  354794 mustload.go:66] Loading cluster: default-k8s-diff-port-361270
	I1213 09:12:33.504957  354794 config.go:182] Loaded profile config "default-k8s-diff-port-361270": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:12:33.505316  354794 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-361270 --format={{.State.Status}}
	I1213 09:12:33.524008  354794 host.go:66] Checking if "default-k8s-diff-port-361270" exists ...
	I1213 09:12:33.524358  354794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:12:33.580620  354794 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-13 09:12:33.571038566 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:12:33.581250  354794 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765481609-22101/minikube-v1.37.0-1765481609-22101-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765481609-22101-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-361270 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1213 09:12:33.584286  354794 out.go:179] * Pausing node default-k8s-diff-port-361270 ... 
	I1213 09:12:33.585449  354794 host.go:66] Checking if "default-k8s-diff-port-361270" exists ...
	I1213 09:12:33.585763  354794 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:33.585808  354794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-361270
	I1213 09:12:33.603848  354794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/default-k8s-diff-port-361270/id_rsa Username:docker}
	I1213 09:12:33.698195  354794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:33.710340  354794 pause.go:52] kubelet running: true
	I1213 09:12:33.710408  354794 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:33.870370  354794 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:33.870471  354794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:33.936430  354794 cri.go:89] found id: "3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e"
	I1213 09:12:33.936465  354794 cri.go:89] found id: "f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7"
	I1213 09:12:33.936470  354794 cri.go:89] found id: "147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6"
	I1213 09:12:33.936473  354794 cri.go:89] found id: "644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95"
	I1213 09:12:33.936476  354794 cri.go:89] found id: "8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	I1213 09:12:33.936495  354794 cri.go:89] found id: "d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c"
	I1213 09:12:33.936499  354794 cri.go:89] found id: "12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6"
	I1213 09:12:33.936503  354794 cri.go:89] found id: "173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8"
	I1213 09:12:33.936508  354794 cri.go:89] found id: "1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47"
	I1213 09:12:33.936526  354794 cri.go:89] found id: "6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	I1213 09:12:33.936537  354794 cri.go:89] found id: "5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7"
	I1213 09:12:33.936541  354794 cri.go:89] found id: ""
	I1213 09:12:33.936586  354794 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:33.948647  354794 retry.go:31] will retry after 151.599139ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:33Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:34.101055  354794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:34.113771  354794 pause.go:52] kubelet running: false
	I1213 09:12:34.113820  354794 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:34.251337  354794 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:34.251404  354794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:34.315929  354794 cri.go:89] found id: "3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e"
	I1213 09:12:34.315951  354794 cri.go:89] found id: "f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7"
	I1213 09:12:34.315956  354794 cri.go:89] found id: "147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6"
	I1213 09:12:34.315959  354794 cri.go:89] found id: "644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95"
	I1213 09:12:34.315972  354794 cri.go:89] found id: "8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	I1213 09:12:34.315976  354794 cri.go:89] found id: "d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c"
	I1213 09:12:34.315979  354794 cri.go:89] found id: "12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6"
	I1213 09:12:34.315981  354794 cri.go:89] found id: "173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8"
	I1213 09:12:34.315984  354794 cri.go:89] found id: "1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47"
	I1213 09:12:34.315991  354794 cri.go:89] found id: "6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	I1213 09:12:34.315997  354794 cri.go:89] found id: "5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7"
	I1213 09:12:34.316000  354794 cri.go:89] found id: ""
	I1213 09:12:34.316036  354794 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:34.327943  354794 retry.go:31] will retry after 408.710968ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:34Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:34.737587  354794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:34.750423  354794 pause.go:52] kubelet running: false
	I1213 09:12:34.750473  354794 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:34.897036  354794 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:34.897103  354794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:34.963395  354794 cri.go:89] found id: "3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e"
	I1213 09:12:34.963420  354794 cri.go:89] found id: "f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7"
	I1213 09:12:34.963425  354794 cri.go:89] found id: "147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6"
	I1213 09:12:34.963429  354794 cri.go:89] found id: "644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95"
	I1213 09:12:34.963431  354794 cri.go:89] found id: "8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	I1213 09:12:34.963435  354794 cri.go:89] found id: "d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c"
	I1213 09:12:34.963438  354794 cri.go:89] found id: "12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6"
	I1213 09:12:34.963440  354794 cri.go:89] found id: "173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8"
	I1213 09:12:34.963443  354794 cri.go:89] found id: "1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47"
	I1213 09:12:34.963462  354794 cri.go:89] found id: "6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	I1213 09:12:34.963469  354794 cri.go:89] found id: "5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7"
	I1213 09:12:34.963473  354794 cri.go:89] found id: ""
	I1213 09:12:34.963535  354794 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:34.974632  354794 retry.go:31] will retry after 423.627791ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:34Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:35.399332  354794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 09:12:35.412137  354794 pause.go:52] kubelet running: false
	I1213 09:12:35.412216  354794 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1213 09:12:35.554933  354794 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1213 09:12:35.555009  354794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1213 09:12:35.619700  354794 cri.go:89] found id: "3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e"
	I1213 09:12:35.619724  354794 cri.go:89] found id: "f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7"
	I1213 09:12:35.619729  354794 cri.go:89] found id: "147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6"
	I1213 09:12:35.619734  354794 cri.go:89] found id: "644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95"
	I1213 09:12:35.619738  354794 cri.go:89] found id: "8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	I1213 09:12:35.619741  354794 cri.go:89] found id: "d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c"
	I1213 09:12:35.619744  354794 cri.go:89] found id: "12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6"
	I1213 09:12:35.619746  354794 cri.go:89] found id: "173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8"
	I1213 09:12:35.619749  354794 cri.go:89] found id: "1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47"
	I1213 09:12:35.619755  354794 cri.go:89] found id: "6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	I1213 09:12:35.619758  354794 cri.go:89] found id: "5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7"
	I1213 09:12:35.619760  354794 cri.go:89] found id: ""
	I1213 09:12:35.619797  354794 ssh_runner.go:195] Run: sudo runc list -f json
	I1213 09:12:35.633681  354794 out.go:203] 
	W1213 09:12:35.634925  354794 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1213 09:12:35.634960  354794 out.go:285] * 
	* 
	W1213 09:12:35.639102  354794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1213 09:12:35.640399  354794 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-361270 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-361270
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-361270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	        "Created": "2025-12-13T09:10:34.393520957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:37.31386389Z",
	            "FinishedAt": "2025-12-13T09:11:36.389776801Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hostname",
	        "HostsPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hosts",
	        "LogPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122-json.log",
	        "Name": "/default-k8s-diff-port-361270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-361270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-361270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	                "LowerDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-361270",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-361270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-361270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "809d7190a787208f65da73870e92f7900386cf427b14f5af234f66cf278068ac",
	            "SandboxKey": "/var/run/docker/netns/809d7190a787",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-361270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6564b4ae9b49d7064dcdd83dbabbc1dc234669ef37d48771177caa6ad8786081",
	                    "EndpointID": "9815a3c563edd9c12d398197959dbf52ebcd4d03df51f8d5a5029860ed49631b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:13:9e:14:02:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-361270",
	                        "33e3412677dd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
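For reference, the host-side SSH port that the harness later dials (33133 in the NetworkSettings.Ports block above) can be read back with the same docker Go-template the test logs themselves use; a minimal sketch, assuming the container is named after the profile shown above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-361270

The empty HostPort entries under HostConfig.PortBindings only mean "let Docker pick a free port"; the resolved values live under NetworkSettings.Ports, which is why the template indexes that section.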
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270: exit status 2 (316.624353ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
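The `--format` flag on `minikube status` takes a Go template over the status fields, so the harness can print a single component; `{{.Host}}` yields only the machine state ("Running" above), while the non-zero exit status reflects that other components are not reported as running on the paused cluster, which the harness explicitly tolerates ("may be ok" below). A minimal sketch of the same check against the profile above:

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270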
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25: (1.055494329s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬───────
──────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼───────
──────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ newest-cni-966117 image list --format=json                                                                                                                                                                                                           │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p newest-cni-966117 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	│ delete  │ -p newest-cni-966117                                                                                                                                                                                                                                 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p newest-cni-966117                                                                                                                                                                                                                                 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ default-k8s-diff-port-361270 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p default-k8s-diff-port-361270 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:11:58.872086  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:00.872161  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:11:59.717723  348846 out.go:252] * Restarting existing docker container for "newest-cni-966117" ...
	I1213 09:11:59.717793  348846 cli_runner.go:164] Run: docker start newest-cni-966117
	I1213 09:11:59.987095  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:00.008413  348846 kic.go:430] container "newest-cni-966117" state is running.
	I1213 09:12:00.008872  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:00.029442  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:12:00.029747  348846 machine.go:94] provisionDockerMachine start ...
	I1213 09:12:00.029825  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:00.049967  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:00.050320  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:00.050338  348846 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:12:00.050937  348846 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42964->127.0.0.1:33138: read: connection reset by peer
	I1213 09:12:03.188177  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.188220  348846 ubuntu.go:182] provisioning hostname "newest-cni-966117"
	I1213 09:12:03.188304  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.208635  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.208982  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.209009  348846 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-966117 && echo "newest-cni-966117" | sudo tee /etc/hostname
	I1213 09:12:03.356451  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.356550  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.377602  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.377902  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.377928  348846 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-966117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-966117/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-966117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:12:03.515384  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:03.515414  348846 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:12:03.515443  348846 ubuntu.go:190] setting up certificates
	I1213 09:12:03.515457  348846 provision.go:84] configureAuth start
	I1213 09:12:03.515533  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:03.535934  348846 provision.go:143] copyHostCerts
	I1213 09:12:03.536012  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:12:03.536028  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:12:03.536096  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:12:03.536187  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:12:03.536195  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:12:03.536232  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:12:03.536293  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:12:03.536301  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:12:03.536324  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:12:03.536386  348846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.newest-cni-966117 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-966117]
	I1213 09:12:03.747763  348846 provision.go:177] copyRemoteCerts
	I1213 09:12:03.747825  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:12:03.747884  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.768773  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:03.867273  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:12:03.886803  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:12:03.905579  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:12:03.923712  348846 provision.go:87] duration metric: took 408.231151ms to configureAuth
	I1213 09:12:03.923746  348846 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:12:03.923916  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:03.924009  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.944125  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.944478  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.944524  348846 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:12:04.251417  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:12:04.251442  348846 machine.go:97] duration metric: took 4.221675747s to provisionDockerMachine
	I1213 09:12:04.251456  348846 start.go:293] postStartSetup for "newest-cni-966117" (driver="docker")
	I1213 09:12:04.251472  348846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:12:04.251566  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:12:04.251603  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.271923  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.377174  348846 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:12:04.380783  348846 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:12:04.380806  348846 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:12:04.380816  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:12:04.380867  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:12:04.380942  348846 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:12:04.381032  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:12:04.388870  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:04.406744  348846 start.go:296] duration metric: took 155.274167ms for postStartSetup
	I1213 09:12:04.406824  348846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:04.406859  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.425060  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.519117  348846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:12:04.523740  348846 fix.go:56] duration metric: took 4.825619979s for fixHost
	I1213 09:12:04.523761  348846 start.go:83] releasing machines lock for "newest-cni-966117", held for 4.825662551s
	I1213 09:12:04.523813  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:04.542972  348846 ssh_runner.go:195] Run: cat /version.json
	I1213 09:12:04.543037  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.543070  348846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:12:04.543152  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.562091  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.562364  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.707160  348846 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:04.714445  348846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:12:04.750084  348846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:12:04.755144  348846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:12:04.755236  348846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:12:04.763878  348846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:12:04.763929  348846 start.go:496] detecting cgroup driver to use...
	I1213 09:12:04.763964  348846 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:12:04.764013  348846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:12:04.778097  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:12:04.790942  348846 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:12:04.790991  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:12:04.805770  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:12:04.818577  348846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:12:04.898219  348846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:12:04.978617  348846 docker.go:234] disabling docker service ...
	I1213 09:12:04.978680  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:12:04.992928  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:12:05.005978  348846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:12:05.088758  348846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:12:05.171196  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:12:05.183599  348846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:12:05.197833  348846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:12:05.197897  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.206562  348846 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:12:05.206647  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.215907  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.224628  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.232991  348846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:12:05.240720  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.249141  348846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.257427  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.265929  348846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:12:05.273133  348846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:12:05.281944  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.385356  348846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:12:05.520929  348846 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:12:05.521001  348846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:12:05.526028  348846 start.go:564] Will wait 60s for crictl version
	I1213 09:12:05.526097  348846 ssh_runner.go:195] Run: which crictl
	I1213 09:12:05.529805  348846 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:12:05.555375  348846 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:12:05.555477  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.584114  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.615327  348846 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 09:12:05.616457  348846 cli_runner.go:164] Run: docker network inspect newest-cni-966117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:12:05.635292  348846 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 09:12:05.639617  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.651081  348846 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:12:05.652314  348846 kubeadm.go:884] updating cluster {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:12:05.652516  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:12:05.652581  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.687546  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.687577  348846 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:12:05.687628  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.715637  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.715657  348846 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:12:05.715664  348846 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 09:12:05.715759  348846 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-966117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:12:05.715822  348846 ssh_runner.go:195] Run: crio config
	I1213 09:12:05.770581  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:12:05.770606  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:12:05.770621  348846 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:12:05.770642  348846 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-966117 NodeName:newest-cni-966117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:12:05.770778  348846 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-966117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
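The generated config above pins the pod subnet to 10.42.0.0/16 (from the pod-network-cidr extra option) and the service subnet to 10.96.0.0/12; those two ranges must stay disjoint or cluster routing breaks. A small stand-alone Go sketch of that sanity check (illustrative only, not minikube code; the literals are copied from the config above):

// cidrcheck.go: verify that the pod and service CIDRs from the kubeadm
// config do not overlap. For two CIDR blocks, overlap implies one block
// contains the other's network address, so two Contains checks suffice.
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podCIDR, err := net.ParseCIDR("10.42.0.0/16") // podSubnet above
	if err != nil {
		panic(err)
	}
	_, svcCIDR, err := net.ParseCIDR("10.96.0.0/12") // serviceSubnet above
	if err != nil {
		panic(err)
	}
	if overlaps(podCIDR, svcCIDR) {
		fmt.Println("pod and service CIDRs overlap: invalid config")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}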
	I1213 09:12:05.770848  348846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:12:05.779678  348846 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:12:05.779739  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:12:05.788501  348846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 09:12:05.802841  348846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:12:05.817454  348846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1213 09:12:05.830458  348846 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:12:05.834197  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.845034  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.926720  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:05.997523  348846 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117 for IP: 192.168.94.2
	I1213 09:12:05.997546  348846 certs.go:195] generating shared ca certs ...
	I1213 09:12:05.997566  348846 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:05.997713  348846 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:12:05.997768  348846 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:12:05.997783  348846 certs.go:257] generating profile certs ...
	I1213 09:12:05.997915  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/client.key
	I1213 09:12:05.998006  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key.4ee2f72f
	I1213 09:12:05.998061  348846 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key
	I1213 09:12:05.998197  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:12:05.998243  348846 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:12:05.998258  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:12:05.998299  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:12:05.998335  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:12:05.998375  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:12:05.998435  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:05.999149  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:12:06.019574  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:12:06.039769  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:12:06.061044  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:12:06.086891  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:12:06.112254  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:12:06.130707  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:12:06.149223  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:12:06.166960  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:12:06.184981  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:12:06.204120  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:12:06.224026  348846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:12:06.237075  348846 ssh_runner.go:195] Run: openssl version
	I1213 09:12:06.244173  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.252708  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:12:06.260879  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265095  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265166  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.301161  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:12:06.309231  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.316876  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:12:06.324648  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328583  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328649  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.363986  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:12:06.373146  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.380694  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:12:06.388858  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392640  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392699  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.427810  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:12:06.435920  348846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:12:06.440255  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:12:06.477879  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:12:06.517466  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:12:06.566357  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:12:06.614264  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:12:06.667715  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
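The openssl `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours. A minimal, stand-alone Go sketch of the same check (the certificate path and the helper name are illustrative, not taken from this run):

// certcheck.go: report whether a PEM certificate expires within a window,
// mirroring what `openssl x509 -noout -checkend 86400` tests.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent of -checkend: true if NotAfter falls before now+window.
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}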
	I1213 09:12:06.721234  348846 kubeadm.go:401] StartCluster: {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:12:06.721340  348846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:12:06.721412  348846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:12:06.758871  348846 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:06.758895  348846 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:06.758901  348846 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:06.758906  348846 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:06.758910  348846 cri.go:89] found id: ""
	I1213 09:12:06.758964  348846 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:12:06.773080  348846 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:06Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:06.773166  348846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:12:06.784547  348846 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:12:06.784577  348846 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:12:06.784628  348846 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:12:06.795795  348846 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:12:06.796533  348846 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-966117" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.796828  348846 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-966117" cluster setting kubeconfig missing "newest-cni-966117" context setting]
	I1213 09:12:06.797449  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.800252  348846 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:12:06.810290  348846 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 09:12:06.810327  348846 kubeadm.go:602] duration metric: took 25.742497ms to restartPrimaryControlPlane
	I1213 09:12:06.810339  348846 kubeadm.go:403] duration metric: took 89.114693ms to StartCluster
	I1213 09:12:06.810357  348846 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.810417  348846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.811517  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.811783  348846 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:12:06.811972  348846 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:12:06.812098  348846 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-966117"
	I1213 09:12:06.812122  348846 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-966117"
	I1213 09:12:06.812123  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:06.812116  348846 addons.go:70] Setting dashboard=true in profile "newest-cni-966117"
	I1213 09:12:06.812148  348846 addons.go:239] Setting addon dashboard=true in "newest-cni-966117"
	I1213 09:12:06.812140  348846 addons.go:70] Setting default-storageclass=true in profile "newest-cni-966117"
	W1213 09:12:06.812157  348846 addons.go:248] addon dashboard should already be in state true
	I1213 09:12:06.812169  348846 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-966117"
	I1213 09:12:06.812199  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	W1213 09:12:06.812131  348846 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:12:06.812253  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.812524  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812689  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812745  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.816903  348846 out.go:179] * Verifying Kubernetes components...
	I1213 09:12:06.819064  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:06.840702  348846 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:12:06.842378  348846 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:12:06.842618  348846 addons.go:239] Setting addon default-storageclass=true in "newest-cni-966117"
	W1213 09:12:06.842633  348846 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:12:06.842661  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.843100  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.845434  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:12:06.845458  348846 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:12:06.845538  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.848056  348846 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 09:12:02.872197  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:05.371940  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:06.852360  348846 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:06.852383  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:12:06.852438  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.883177  348846 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:06.883209  348846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:12:06.883277  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.884226  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.897411  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.909108  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.978460  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:06.993222  348846 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:12:06.993297  348846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:06.997850  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:12:06.997871  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:12:07.007126  348846 api_server.go:72] duration metric: took 195.312265ms to wait for apiserver process to appear ...
	I1213 09:12:07.007154  348846 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:12:07.007175  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:07.011914  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:07.014555  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:12:07.014577  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:12:07.019771  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:07.029222  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:12:07.029247  348846 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:12:07.046837  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:12:07.046861  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:12:07.062365  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:12:07.062392  348846 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:12:07.078224  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:12:07.078262  348846 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:12:07.092750  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:12:07.092770  348846 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:12:07.110234  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:12:07.110262  348846 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:12:07.124880  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.124902  348846 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:12:07.140572  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.991252  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:07.991294  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:07.991314  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.004904  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.004937  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.008203  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.021272  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.021307  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.507813  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.512905  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:08.512932  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:08.545622  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.525818473s)
	I1213 09:12:08.545637  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.533695672s)
	I1213 09:12:08.545757  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.40514658s)
	I1213 09:12:08.547413  348846 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-966117 addons enable metrics-server
	
	I1213 09:12:08.557103  348846 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:12:08.558451  348846 addons.go:530] duration metric: took 1.746499358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:12:09.007392  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.011715  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:09.011744  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:09.507347  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.511631  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:12:09.513046  348846 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:12:09.513079  348846 api_server.go:131] duration metric: took 2.505917364s to wait for apiserver health ...
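The healthz probes above first return 403 (anonymous access to /healthz is refused until the RBAC bootstrap roles are in place), then 500 while the remaining post-start hooks finish, and finally 200. A minimal Go sketch of this poll-until-200 pattern follows; it is not the api_server.go implementation, and the URL, timeout, and poll interval are illustrative. Verification of the minikubeCA-signed serving cert is skipped here because the sketch does not load that CA:

// healthwait.go: poll an apiserver /healthz endpoint until it answers 200,
// tolerating transient 403/500 responses during startup.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skip cert verification: the serving cert is signed by a cluster CA
		// that this sketch does not load.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}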
	I1213 09:12:09.513097  348846 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:12:09.516862  348846 system_pods.go:59] 8 kube-system pods found
	I1213 09:12:09.516915  348846 system_pods.go:61] "coredns-7d764666f9-sk2nl" [37f2d8b3-7ed6-4e82-9143-7d913b7b5f77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516936  348846 system_pods.go:61] "etcd-newest-cni-966117" [d5f60407-9ff1-41b0-8842-112a9d4e4db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:12:09.516944  348846 system_pods.go:61] "kindnet-4ccdw" [e37a84fb-6bb4-46c9-abd8-7faff492b11f] Running
	I1213 09:12:09.516951  348846 system_pods.go:61] "kube-apiserver-newest-cni-966117" [ca4879bf-a328-40f8-bd80-067ce393ba2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:12:09.516956  348846 system_pods.go:61] "kube-controller-manager-newest-cni-966117" [384bdaff-8ec0-437d-b7b2-9186a3d77d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:12:09.516961  348846 system_pods.go:61] "kube-proxy-lnm62" [38b74d8a-68b4-4816-bec2-fad7da0471f8] Running
	I1213 09:12:09.516966  348846 system_pods.go:61] "kube-scheduler-newest-cni-966117" [16be3154-0cd9-494f-bdbf-d41819d2c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:12:09.516975  348846 system_pods.go:61] "storage-provisioner" [31d3def0-8e7d-4759-a1b9-0fad99271611] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516980  348846 system_pods.go:74] duration metric: took 3.876843ms to wait for pod list to return data ...
	I1213 09:12:09.516989  348846 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:12:09.519603  348846 default_sa.go:45] found service account: "default"
	I1213 09:12:09.519627  348846 default_sa.go:55] duration metric: took 2.631674ms for default service account to be created ...
	I1213 09:12:09.519643  348846 kubeadm.go:587] duration metric: took 2.707831782s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:12:09.519662  348846 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:12:09.522032  348846 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:12:09.522053  348846 node_conditions.go:123] node cpu capacity is 8
	I1213 09:12:09.522067  348846 node_conditions.go:105] duration metric: took 2.401048ms to run NodePressure ...
	I1213 09:12:09.522078  348846 start.go:242] waiting for startup goroutines ...
	I1213 09:12:09.522084  348846 start.go:247] waiting for cluster config update ...
	I1213 09:12:09.522094  348846 start.go:256] writing updated cluster config ...
	I1213 09:12:09.522385  348846 ssh_runner.go:195] Run: rm -f paused
	I1213 09:12:09.569110  348846 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:12:09.570864  348846 out.go:179] * Done! kubectl is now configured to use "newest-cni-966117" cluster and "default" namespace by default
	W1213 09:12:07.870810  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:09.873311  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:12.373184  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:14.871194  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:16.872364  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:19.370156  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:20.370628  344087 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:12:20.370655  344087 pod_ready.go:86] duration metric: took 33.00509608s for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.372949  344087 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.376301  344087 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.376318  344087 pod_ready.go:86] duration metric: took 3.345709ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.378179  344087 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.394829  344087 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.394858  344087 pod_ready.go:86] duration metric: took 16.650618ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.398454  344087 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.569256  344087 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.569282  344087 pod_ready.go:86] duration metric: took 170.8099ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.769393  344087 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.169597  344087 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:12:21.169627  344087 pod_ready.go:86] duration metric: took 400.213054ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.368623  344087 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.768635  344087 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:21.768661  344087 pod_ready.go:86] duration metric: took 400.016263ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.768672  344087 pod_ready.go:40] duration metric: took 34.406964078s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:12:21.813431  344087 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:12:21.815222  344087 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.472115031Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.47526199Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.475280853Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.624279699Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5774705f-b188-4dd3-9b7e-1c741e5b3bc5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.627605938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7231714b-fc55-4602-a3ef-2e0e9ff3160d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.63087266Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=8bb3c4bb-11c2-415d-aad8-13be2a74a992 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.631021188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.638347613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.638822518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.668449806Z" level=info msg="Created container 6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=8bb3c4bb-11c2-415d-aad8-13be2a74a992 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.669141406Z" level=info msg="Starting container: 6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69" id=eb6a1f09-88b3-493b-991a-c604a568f8f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.671430473Z" level=info msg="Started container" PID=1765 containerID=6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper id=eb6a1f09-88b3-493b-991a-c604a568f8f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e56a5b9e6025b719bf5ea35f1852469d5adf047dbd51ce43ebab0af9fb1471ff
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.728986504Z" level=info msg="Removing container: b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317" id=e31b21da-3f24-4f91-8797-6bfe75226f14 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.739169941Z" level=info msg="Removed container b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=e31b21da-3f24-4f91-8797-6bfe75226f14 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.750315015Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ffec7155-5ab4-4e30-b04b-3f3f262c0ef0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.751304399Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=de1ac95e-d83e-48b6-a211-6229860f3b1e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.752443538Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0eaa9e49-1a12-473c-af8d-87f40e1a5597 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.752640063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.756961348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757156826Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1b1299fa65af51b4da5f0dd1b99fdb55877f8662a95c67cba6893f235488d069/merged/etc/passwd: no such file or directory"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757191674Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1b1299fa65af51b4da5f0dd1b99fdb55877f8662a95c67cba6893f235488d069/merged/etc/group: no such file or directory"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757558612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.789707886Z" level=info msg="Created container 3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e: kube-system/storage-provisioner/storage-provisioner" id=0eaa9e49-1a12-473c-af8d-87f40e1a5597 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.790339073Z" level=info msg="Starting container: 3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e" id=c61958d6-d371-4c5e-a079-90afab7007e8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.792080448Z" level=info msg="Started container" PID=1779 containerID=3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e description=kube-system/storage-provisioner/storage-provisioner id=c61958d6-d371-4c5e-a079-90afab7007e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2891790bb3f3da9d453bca277f59ea9144a95900f01103983e500236b84f1c01
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3a6ccc8828213       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   2891790bb3f3d       storage-provisioner                                    kube-system
	6b22b815fcc62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   e56a5b9e6025b       dashboard-metrics-scraper-6ffb444bf9-zbfg9             kubernetes-dashboard
	5453ca1cc46c2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   96a9bccb5c573       kubernetes-dashboard-855c9754f9-ww2pb                  kubernetes-dashboard
	f395b3df7c1cc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   feae305ccac4c       coredns-66bc5c9577-xhjmn                               kube-system
	731745ea7066b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   f31f7a886109f       busybox                                                default
	147013bf941aa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   4aa49ddf9df13       kindnet-g6h8g                                          kube-system
	644b03e3b5bd5       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           49 seconds ago      Running             kube-proxy                  0                   fae728b6ea51f       kube-proxy-78nr2                                       kube-system
	8f8bf95e6ad87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   2891790bb3f3d       storage-provisioner                                    kube-system
	d2c1b6b0bb4e9       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           52 seconds ago      Running             kube-controller-manager     0                   d2fea5f8d5902       kube-controller-manager-default-k8s-diff-port-361270   kube-system
	12825df66baea       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           52 seconds ago      Running             kube-apiserver              0                   b2cc705fa3482       kube-apiserver-default-k8s-diff-port-361270            kube-system
	173e64f97cc32       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           52 seconds ago      Running             kube-scheduler              0                   a0cd64e66f35d       kube-scheduler-default-k8s-diff-port-361270            kube-system
	1fa5b689652f2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   a4d3ac7b061d3       etcd-default-k8s-diff-port-361270                      kube-system
	
	
	==> coredns [f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41381 - 15283 "HINFO IN 6362231300460410859.867155602260387096. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.437041004s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-361270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-361270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=default-k8s-diff-port-361270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-361270
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:12:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:11:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-361270
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2dd9bf5a-9012-41ec-b7a7-58f5e5034374
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-xhjmn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-361270                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-g6h8g                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-361270             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-361270    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-78nr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-361270             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zbfg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ww2pb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node default-k8s-diff-port-361270 event: Registered Node default-k8s-diff-port-361270 in Controller
	  Normal  NodeReady                90s                kubelet          Node default-k8s-diff-port-361270 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-361270 event: Registered Node default-k8s-diff-port-361270 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47] <==
	{"level":"warn","ts":"2025-12-13T09:11:45.104567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.114524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.121897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.129514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.137718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.145428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.154894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.162975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.171784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.184722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.192945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.201235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.209616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.216944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.224475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.232153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.238700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.245707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.252199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.259623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.267012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.287853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.295614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.302153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.350994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:36 up 55 min,  0 user,  load average: 1.84, 3.01, 2.28
	Linux default-k8s-diff-port-361270 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6] <==
	I1213 09:11:47.260810       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:11:47.261049       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 09:11:47.261227       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:11:47.261244       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:11:47.261270       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:11:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:11:47.460018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:11:47.460049       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:11:47.460059       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:11:47.460201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:11:47.960706       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:11:47.960815       1 metrics.go:72] Registering metrics
	I1213 09:11:47.960905       1 controller.go:711] "Syncing nftables rules"
	I1213 09:11:57.460533       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:11:57.460615       1 main.go:301] handling current node
	I1213 09:12:07.459629       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:07.459665       1 main.go:301] handling current node
	I1213 09:12:17.459580       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:17.459615       1 main.go:301] handling current node
	I1213 09:12:27.459545       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:27.459628       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6] <==
	I1213 09:11:45.824911       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:11:45.824923       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:11:45.824934       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 09:11:45.824946       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 09:11:45.824963       1 aggregator.go:171] initial CRD sync complete...
	I1213 09:11:45.824972       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:11:45.824977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:11:45.824911       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:11:45.824983       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:11:45.825035       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 09:11:45.825019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:11:45.826812       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:11:45.831834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:11:45.852822       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:11:46.110907       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:11:46.141163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:11:46.183035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:11:46.190335       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:11:46.198398       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:11:46.229833       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.113.46"}
	I1213 09:11:46.238299       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.103.84"}
	I1213 09:11:46.734460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:11:49.511598       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:11:49.559981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:11:49.660902       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c] <==
	I1213 09:11:49.157288       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:11:49.157308       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:11:49.157375       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:11:49.157413       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:11:49.157608       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:11:49.157611       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:11:49.157989       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:11:49.157991       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:11:49.158290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:11:49.162057       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:49.162083       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:11:49.162075       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:11:49.164290       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:11:49.164327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:49.164338       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:11:49.164367       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:11:49.164377       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:11:49.164384       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:11:49.165542       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:11:49.168890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:11:49.173164       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:11:49.174367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:11:49.175453       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:11:49.181692       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:11:49.184966       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95] <==
	I1213 09:11:47.045164       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:11:47.122331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:11:47.222437       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:11:47.222472       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 09:11:47.222588       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:11:47.240362       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:11:47.240403       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:11:47.245155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:11:47.245520       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:11:47.245556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:47.246725       1 config.go:200] "Starting service config controller"
	I1213 09:11:47.246755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:11:47.246780       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:11:47.246799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:11:47.246890       1 config.go:309] "Starting node config controller"
	I1213 09:11:47.246904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:11:47.247014       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:11:47.247042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:11:47.347687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:11:47.347725       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:11:47.347734       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:11:47.347800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8] <==
	I1213 09:11:44.615248       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:11:45.740793       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:11:45.741265       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:11:45.741294       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:11:45.741394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:11:45.780244       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:11:45.780344       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:45.783541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:45.784002       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:45.783782       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:11:45.783760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:11:45.884608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:11:49 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:49.878694     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7rr\" (UniqueName: \"kubernetes.io/projected/237a7343-83ad-4a5f-9093-de528e47ff9f-kube-api-access-lf7rr\") pod \"kubernetes-dashboard-855c9754f9-ww2pb\" (UID: \"237a7343-83ad-4a5f-9093-de528e47ff9f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ww2pb"
	Dec 13 09:11:50 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:50.205149     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 09:11:52 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:52.675806     733 scope.go:117] "RemoveContainer" containerID="09d58cc242da9f787421582038fcd52bfad306c1dbe31718caeb7d93929c1564"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:53.680561     733 scope.go:117] "RemoveContainer" containerID="09d58cc242da9f787421582038fcd52bfad306c1dbe31718caeb7d93929c1564"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:53.680680     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:53.680895     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:11:54 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:54.683756     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:54 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:54.683963     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:55.699096     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ww2pb" podStartSLOduration=1.620764124 podStartE2EDuration="6.699066383s" podCreationTimestamp="2025-12-13 09:11:49 +0000 UTC" firstStartedPulling="2025-12-13 09:11:50.107480169 +0000 UTC m=+6.581835028" lastFinishedPulling="2025-12-13 09:11:55.185782417 +0000 UTC m=+11.660137287" observedRunningTime="2025-12-13 09:11:55.698667443 +0000 UTC m=+12.173022322" watchObservedRunningTime="2025-12-13 09:11:55.699066383 +0000 UTC m=+12.173421261"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:55.942744     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:55.942915     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.623746     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.727608     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.727885     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:09.728100     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:15 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:15.943073     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:15 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:15.943266     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:17 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:17.749922     733 scope.go:117] "RemoveContainer" containerID="8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	Dec 13 09:12:27 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:27.623983     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:27 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:27.624181     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:12:33 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:33.848146     733 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: kubelet.service: Consumed 1.673s CPU time.
	
	
	==> kubernetes-dashboard [5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7] <==
	2025/12/13 09:11:55 Using namespace: kubernetes-dashboard
	2025/12/13 09:11:55 Using in-cluster config to connect to apiserver
	2025/12/13 09:11:55 Using secret token for csrf signing
	2025/12/13 09:11:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:11:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:11:55 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:11:55 Generating JWE encryption key
	2025/12/13 09:11:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:11:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:11:55 Initializing JWE encryption key from synchronized object
	2025/12/13 09:11:55 Creating in-cluster Sidecar client
	2025/12/13 09:11:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:55 Serving insecurely on HTTP port: 9090
	2025/12/13 09:12:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:55 Starting overwatch
	
	
	==> storage-provisioner [3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e] <==
	I1213 09:12:17.805042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:12:17.813217       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:12:17.813274       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:12:17.815614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:21.270871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:25.531649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:29.130012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:32.184647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.206657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.210978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:35.211107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:12:35.211199       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"008c8b69-db9b-496b-ba4e-78cdc6236358", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a became leader
	I1213 09:12:35.211278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a!
	W1213 09:12:35.213059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.216894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:35.311604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a!
	
	
	==> storage-provisioner [8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec] <==
	I1213 09:11:46.996984       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:12:17.000917       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270: exit status 2 (317.503507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-361270
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-361270:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	        "Created": "2025-12-13T09:10:34.393520957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344313,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-13T09:11:37.31386389Z",
	            "FinishedAt": "2025-12-13T09:11:36.389776801Z"
	        },
	        "Image": "sha256:5ece92cc37359bacec97d75171c7b54eb5669d0b3aa1fe3e08b778d0db5c0ebd",
	        "ResolvConfPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hostname",
	        "HostsPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/hosts",
	        "LogPath": "/var/lib/docker/containers/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122/33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122-json.log",
	        "Name": "/default-k8s-diff-port-361270",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-361270:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-361270",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "33e3412677ddd02e4ae7dc9d5102dc50d9ba6410bfa0ba2d73b67433a7f88122",
	                "LowerDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8-init/diff:/var/lib/docker/overlay2/24c06c12b495f32a87c4def53082d6a0423f8d29357ce41741756ceaaa008578/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eaeb52f2095d7e5f8986a69d2edbe8afe0a205bb9fc051803008936187282ad8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-361270",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-361270/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-361270",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-361270",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "809d7190a787208f65da73870e92f7900386cf427b14f5af234f66cf278068ac",
	            "SandboxKey": "/var/run/docker/netns/809d7190a787",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-361270": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6564b4ae9b49d7064dcdd83dbabbc1dc234669ef37d48771177caa6ad8786081",
	                    "EndpointID": "9815a3c563edd9c12d398197959dbf52ebcd4d03df51f8d5a5029860ed49631b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:13:9e:14:02:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-361270",
	                        "33e3412677dd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
E1213 09:12:37.714447    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270: exit status 2 (332.433766ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-361270 logs -n 25: (1.045915487s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                         ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-361270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                   │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ image   │ old-k8s-version-234538 image list --format=json                                                                                                                                                                                                      │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ stop    │ -p default-k8s-diff-port-361270 --alsologtostderr -v=3                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p old-k8s-version-234538 --alsologtostderr -v=1                                                                                                                                                                                                     │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p no-preload-291522                                                                                                                                                                                                                                 │ no-preload-291522            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ delete  │ -p old-k8s-version-234538                                                                                                                                                                                                                            │ old-k8s-version-234538       │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                              │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ addons  │ enable metrics-server -p newest-cni-966117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                              │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ stop    │ -p newest-cni-966117 --alsologtostderr -v=3                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ image   │ embed-certs-379362 image list --format=json                                                                                                                                                                                                          │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ pause   │ -p embed-certs-379362 --alsologtostderr -v=1                                                                                                                                                                                                         │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                         │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:11 UTC │
	│ start   │ -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:11 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p embed-certs-379362                                                                                                                                                                                                                                │ embed-certs-379362           │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ newest-cni-966117 image list --format=json                                                                                                                                                                                                           │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p newest-cni-966117 --alsologtostderr -v=1                                                                                                                                                                                                          │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	│ delete  │ -p newest-cni-966117                                                                                                                                                                                                                                 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ delete  │ -p newest-cni-966117                                                                                                                                                                                                                                 │ newest-cni-966117            │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ image   │ default-k8s-diff-port-361270 image list --format=json                                                                                                                                                                                                │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │ 13 Dec 25 09:12 UTC │
	│ pause   │ -p default-k8s-diff-port-361270 --alsologtostderr -v=1                                                                                                                                                                                               │ default-k8s-diff-port-361270 │ jenkins │ v1.37.0 │ 13 Dec 25 09:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴───────
──────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 09:11:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 09:11:59.511350  348846 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:11:59.511449  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511460  348846 out.go:374] Setting ErrFile to fd 2...
	I1213 09:11:59.511466  348846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:11:59.511676  348846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:11:59.512155  348846 out.go:368] Setting JSON to false
	I1213 09:11:59.513404  348846 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3271,"bootTime":1765613848,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:11:59.513457  348846 start.go:143] virtualization: kvm guest
	I1213 09:11:59.515473  348846 out.go:179] * [newest-cni-966117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:11:59.516692  348846 notify.go:221] Checking for updates...
	I1213 09:11:59.516718  348846 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:11:59.518077  348846 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:11:59.519243  348846 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:11:59.520461  348846 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:11:59.521788  348846 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:11:59.523074  348846 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:11:59.524842  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:11:59.525633  348846 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:11:59.549908  348846 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:11:59.550053  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.608860  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.5995165 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.608976  348846 docker.go:319] overlay module found
	I1213 09:11:59.610766  348846 out.go:179] * Using the docker driver based on existing profile
	I1213 09:11:59.611993  348846 start.go:309] selected driver: docker
	I1213 09:11:59.612013  348846 start.go:927] validating driver "docker" against &{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.612124  348846 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:11:59.612924  348846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:11:59.671889  348846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-12-13 09:11:59.660935388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:11:59.672219  348846 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:11:59.672248  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:11:59.672318  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:11:59.672376  348846 start.go:353] cluster config:
	{Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:11:59.674150  348846 out.go:179] * Starting "newest-cni-966117" primary control-plane node in "newest-cni-966117" cluster
	I1213 09:11:59.675254  348846 cache.go:134] Beginning downloading kic base image for docker with crio
	I1213 09:11:59.676366  348846 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
	I1213 09:11:59.677312  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:11:59.677346  348846 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1213 09:11:59.677357  348846 cache.go:65] Caching tarball of preloaded images
	I1213 09:11:59.677391  348846 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
	I1213 09:11:59.677456  348846 preload.go:238] Found /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1213 09:11:59.677470  348846 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1213 09:11:59.677574  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:11:59.697910  348846 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
	I1213 09:11:59.697929  348846 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
	I1213 09:11:59.697958  348846 cache.go:243] Successfully downloaded all kic artifacts
	I1213 09:11:59.697996  348846 start.go:360] acquireMachinesLock for newest-cni-966117: {Name:mk2b636d64beae36e9b4be83e39d6514423d9194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 09:11:59.698084  348846 start.go:364] duration metric: took 46.374µs to acquireMachinesLock for "newest-cni-966117"
	I1213 09:11:59.698109  348846 start.go:96] Skipping create...Using existing machine configuration
	I1213 09:11:59.698117  348846 fix.go:54] fixHost starting: 
	I1213 09:11:59.698377  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:11:59.716186  348846 fix.go:112] recreateIfNeeded on newest-cni-966117: state=Stopped err=<nil>
	W1213 09:11:59.716211  348846 fix.go:138] unexpected machine state, will restart: <nil>
	W1213 09:11:58.872086  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:00.872161  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:11:59.717723  348846 out.go:252] * Restarting existing docker container for "newest-cni-966117" ...
	I1213 09:11:59.717793  348846 cli_runner.go:164] Run: docker start newest-cni-966117
	I1213 09:11:59.987095  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:00.008413  348846 kic.go:430] container "newest-cni-966117" state is running.
	I1213 09:12:00.008872  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:00.029442  348846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/config.json ...
	I1213 09:12:00.029747  348846 machine.go:94] provisionDockerMachine start ...
	I1213 09:12:00.029825  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:00.049967  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:00.050320  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:00.050338  348846 main.go:143] libmachine: About to run SSH command:
	hostname
	I1213 09:12:00.050937  348846 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42964->127.0.0.1:33138: read: connection reset by peer
	I1213 09:12:03.188177  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.188220  348846 ubuntu.go:182] provisioning hostname "newest-cni-966117"
	I1213 09:12:03.188304  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.208635  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.208982  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.209009  348846 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-966117 && echo "newest-cni-966117" | sudo tee /etc/hostname
	I1213 09:12:03.356451  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-966117
	
	I1213 09:12:03.356550  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.377602  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.377902  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.377928  348846 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-966117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-966117/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-966117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 09:12:03.515384  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1213 09:12:03.515414  348846 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-5776/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-5776/.minikube}
	I1213 09:12:03.515443  348846 ubuntu.go:190] setting up certificates
	I1213 09:12:03.515457  348846 provision.go:84] configureAuth start
	I1213 09:12:03.515533  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:03.535934  348846 provision.go:143] copyHostCerts
	I1213 09:12:03.536012  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem, removing ...
	I1213 09:12:03.536028  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem
	I1213 09:12:03.536096  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/key.pem (1675 bytes)
	I1213 09:12:03.536187  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem, removing ...
	I1213 09:12:03.536195  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem
	I1213 09:12:03.536232  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/ca.pem (1082 bytes)
	I1213 09:12:03.536293  348846 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem, removing ...
	I1213 09:12:03.536301  348846 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem
	I1213 09:12:03.536324  348846 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-5776/.minikube/cert.pem (1123 bytes)
	I1213 09:12:03.536386  348846 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem org=jenkins.newest-cni-966117 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-966117]
	I1213 09:12:03.747763  348846 provision.go:177] copyRemoteCerts
	I1213 09:12:03.747825  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 09:12:03.747884  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.768773  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:03.867273  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1213 09:12:03.886803  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1213 09:12:03.905579  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 09:12:03.923712  348846 provision.go:87] duration metric: took 408.231151ms to configureAuth
	I1213 09:12:03.923746  348846 ubuntu.go:206] setting minikube options for container-runtime
	I1213 09:12:03.923916  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:03.924009  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:03.944125  348846 main.go:143] libmachine: Using SSH client type: native
	I1213 09:12:03.944478  348846 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1213 09:12:03.944524  348846 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 09:12:04.251417  348846 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 09:12:04.251442  348846 machine.go:97] duration metric: took 4.221675747s to provisionDockerMachine
	I1213 09:12:04.251456  348846 start.go:293] postStartSetup for "newest-cni-966117" (driver="docker")
	I1213 09:12:04.251472  348846 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 09:12:04.251566  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 09:12:04.251603  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.271923  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.377174  348846 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 09:12:04.380783  348846 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 09:12:04.380806  348846 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1213 09:12:04.380816  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/addons for local assets ...
	I1213 09:12:04.380867  348846 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-5776/.minikube/files for local assets ...
	I1213 09:12:04.380942  348846 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem -> 93032.pem in /etc/ssl/certs
	I1213 09:12:04.381032  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1213 09:12:04.388870  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:04.406744  348846 start.go:296] duration metric: took 155.274167ms for postStartSetup
	I1213 09:12:04.406824  348846 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 09:12:04.406859  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.425060  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.519117  348846 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 09:12:04.523740  348846 fix.go:56] duration metric: took 4.825619979s for fixHost
	I1213 09:12:04.523761  348846 start.go:83] releasing machines lock for "newest-cni-966117", held for 4.825662551s
	I1213 09:12:04.523813  348846 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-966117
	I1213 09:12:04.542972  348846 ssh_runner.go:195] Run: cat /version.json
	I1213 09:12:04.543037  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.543070  348846 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 09:12:04.543152  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:04.562091  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.562364  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:04.707160  348846 ssh_runner.go:195] Run: systemctl --version
	I1213 09:12:04.714445  348846 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 09:12:04.750084  348846 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1213 09:12:04.755144  348846 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1213 09:12:04.755236  348846 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 09:12:04.763878  348846 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1213 09:12:04.763929  348846 start.go:496] detecting cgroup driver to use...
	I1213 09:12:04.763964  348846 detect.go:190] detected "systemd" cgroup driver on host os
	I1213 09:12:04.764013  348846 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 09:12:04.778097  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 09:12:04.790942  348846 docker.go:218] disabling cri-docker service (if available) ...
	I1213 09:12:04.790991  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 09:12:04.805770  348846 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 09:12:04.818577  348846 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 09:12:04.898219  348846 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 09:12:04.978617  348846 docker.go:234] disabling docker service ...
	I1213 09:12:04.978680  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 09:12:04.992928  348846 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 09:12:05.005978  348846 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 09:12:05.088758  348846 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 09:12:05.171196  348846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 09:12:05.183599  348846 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 09:12:05.197833  348846 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1213 09:12:05.197897  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.206562  348846 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1213 09:12:05.206647  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.215907  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.224628  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.232991  348846 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 09:12:05.240720  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.249141  348846 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.257427  348846 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 09:12:05.265929  348846 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 09:12:05.273133  348846 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 09:12:05.281944  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.385356  348846 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 09:12:05.520929  348846 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 09:12:05.521001  348846 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 09:12:05.526028  348846 start.go:564] Will wait 60s for crictl version
	I1213 09:12:05.526097  348846 ssh_runner.go:195] Run: which crictl
	I1213 09:12:05.529805  348846 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1213 09:12:05.555375  348846 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1213 09:12:05.555477  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.584114  348846 ssh_runner.go:195] Run: crio --version
	I1213 09:12:05.615327  348846 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1213 09:12:05.616457  348846 cli_runner.go:164] Run: docker network inspect newest-cni-966117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 09:12:05.635292  348846 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1213 09:12:05.639617  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.651081  348846 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1213 09:12:05.652314  348846 kubeadm.go:884] updating cluster {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 09:12:05.652516  348846 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1213 09:12:05.652581  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.687546  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.687577  348846 crio.go:433] Images already preloaded, skipping extraction
	I1213 09:12:05.687628  348846 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 09:12:05.715637  348846 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 09:12:05.715657  348846 cache_images.go:86] Images are preloaded, skipping loading
	I1213 09:12:05.715664  348846 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 crio true true} ...
	I1213 09:12:05.715759  348846 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-966117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 09:12:05.715822  348846 ssh_runner.go:195] Run: crio config
	I1213 09:12:05.770581  348846 cni.go:84] Creating CNI manager for ""
	I1213 09:12:05.770606  348846 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 09:12:05.770621  348846 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1213 09:12:05.770642  348846 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-966117 NodeName:newest-cni-966117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 09:12:05.770778  348846 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-966117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 09:12:05.770848  348846 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1213 09:12:05.779678  348846 binaries.go:51] Found k8s binaries, skipping transfer
	I1213 09:12:05.779739  348846 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 09:12:05.788501  348846 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1213 09:12:05.802841  348846 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1213 09:12:05.817454  348846 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
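The 2218-byte file copied here is the kubeadm configuration rendered a few lines above. If that YAML ever needs to be sanity-checked outside the harness, a sketch (assumes the profile container is still up, e.g. via minikube ssh -p newest-cni-966117, and that this kubeadm build ships the config validate subcommand):

	# Validate the staged config with the same kubeadm binary minikube placed on the node
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new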
	I1213 09:12:05.830458  348846 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1213 09:12:05.834197  348846 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 09:12:05.845034  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:05.926720  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:05.997523  348846 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117 for IP: 192.168.94.2
	I1213 09:12:05.997546  348846 certs.go:195] generating shared ca certs ...
	I1213 09:12:05.997566  348846 certs.go:227] acquiring lock for ca certs: {Name:mk80892d6e61f0acf7b97550b52b3341a040901c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:05.997713  348846 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key
	I1213 09:12:05.997768  348846 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key
	I1213 09:12:05.997783  348846 certs.go:257] generating profile certs ...
	I1213 09:12:05.997915  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/client.key
	I1213 09:12:05.998006  348846 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key.4ee2f72f
	I1213 09:12:05.998061  348846 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key
	I1213 09:12:05.998197  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem (1338 bytes)
	W1213 09:12:05.998243  348846 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303_empty.pem, impossibly tiny 0 bytes
	I1213 09:12:05.998258  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 09:12:05.998299  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/ca.pem (1082 bytes)
	I1213 09:12:05.998335  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/cert.pem (1123 bytes)
	I1213 09:12:05.998375  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/certs/key.pem (1675 bytes)
	I1213 09:12:05.998435  348846 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem (1708 bytes)
	I1213 09:12:05.999149  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 09:12:06.019574  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 09:12:06.039769  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 09:12:06.061044  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1213 09:12:06.086891  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1213 09:12:06.112254  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1213 09:12:06.130707  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 09:12:06.149223  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/newest-cni-966117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1213 09:12:06.166960  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 09:12:06.184981  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/certs/9303.pem --> /usr/share/ca-certificates/9303.pem (1338 bytes)
	I1213 09:12:06.204120  348846 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/ssl/certs/93032.pem --> /usr/share/ca-certificates/93032.pem (1708 bytes)
	I1213 09:12:06.224026  348846 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 09:12:06.237075  348846 ssh_runner.go:195] Run: openssl version
	I1213 09:12:06.244173  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.252708  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1213 09:12:06.260879  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265095  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.265166  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 09:12:06.301161  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1213 09:12:06.309231  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.316876  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9303.pem /etc/ssl/certs/9303.pem
	I1213 09:12:06.324648  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328583  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:37 /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.328649  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9303.pem
	I1213 09:12:06.363986  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1213 09:12:06.373146  348846 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.380694  348846 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/93032.pem /etc/ssl/certs/93032.pem
	I1213 09:12:06.388858  348846 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392640  348846 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:37 /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.392699  348846 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93032.pem
	I1213 09:12:06.427810  348846 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1213 09:12:06.435920  348846 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 09:12:06.440255  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1213 09:12:06.477879  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1213 09:12:06.517466  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1213 09:12:06.566357  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1213 09:12:06.614264  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1213 09:12:06.667715  348846 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1213 09:12:06.721234  348846 kubeadm.go:401] StartCluster: {Name:newest-cni-966117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-966117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 09:12:06.721340  348846 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 09:12:06.721412  348846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 09:12:06.758871  348846 cri.go:89] found id: "5b1856a3a07129909b63b61002c6a406d40ab25115690133ad9907c9af301d4e"
	I1213 09:12:06.758895  348846 cri.go:89] found id: "fac698cd1af50b220bc1f2a9b252b26dd2966e87440e25994d1c645cbd7820ff"
	I1213 09:12:06.758901  348846 cri.go:89] found id: "8807f33081db2b27421f17eee364e12fc581fe40c63b1e2f13e70468891cab09"
	I1213 09:12:06.758906  348846 cri.go:89] found id: "0345d6de3446b527dcd60a7b59c72bf14dad6b1213e3c592d7f413738cf10d19"
	I1213 09:12:06.758910  348846 cri.go:89] found id: ""
	I1213 09:12:06.758964  348846 ssh_runner.go:195] Run: sudo runc list -f json
	W1213 09:12:06.773080  348846 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T09:12:06Z" level=error msg="open /run/runc: no such file or directory"
	I1213 09:12:06.773166  348846 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 09:12:06.784547  348846 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1213 09:12:06.784577  348846 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1213 09:12:06.784628  348846 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1213 09:12:06.795795  348846 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1213 09:12:06.796533  348846 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-966117" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.796828  348846 kubeconfig.go:62] /home/jenkins/minikube-integration/22128-5776/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-966117" cluster setting kubeconfig missing "newest-cni-966117" context setting]
	I1213 09:12:06.797449  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.800252  348846 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1213 09:12:06.810290  348846 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1213 09:12:06.810327  348846 kubeadm.go:602] duration metric: took 25.742497ms to restartPrimaryControlPlane
	I1213 09:12:06.810339  348846 kubeadm.go:403] duration metric: took 89.114693ms to StartCluster
	I1213 09:12:06.810357  348846 settings.go:142] acquiring lock: {Name:mkb7db33d439ccf76620dab7051026432361ebb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.810417  348846 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:12:06.811517  348846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-5776/kubeconfig: {Name:mka834f45ebc232af0d413aa010ee0e3622f70b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 09:12:06.811783  348846 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 09:12:06.811972  348846 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1213 09:12:06.812098  348846 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-966117"
	I1213 09:12:06.812122  348846 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-966117"
	I1213 09:12:06.812123  348846 config.go:182] Loaded profile config "newest-cni-966117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 09:12:06.812116  348846 addons.go:70] Setting dashboard=true in profile "newest-cni-966117"
	I1213 09:12:06.812148  348846 addons.go:239] Setting addon dashboard=true in "newest-cni-966117"
	I1213 09:12:06.812140  348846 addons.go:70] Setting default-storageclass=true in profile "newest-cni-966117"
	W1213 09:12:06.812157  348846 addons.go:248] addon dashboard should already be in state true
	I1213 09:12:06.812169  348846 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-966117"
	I1213 09:12:06.812199  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	W1213 09:12:06.812131  348846 addons.go:248] addon storage-provisioner should already be in state true
	I1213 09:12:06.812253  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.812524  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812689  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.812745  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.816903  348846 out.go:179] * Verifying Kubernetes components...
	I1213 09:12:06.819064  348846 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 09:12:06.840702  348846 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1213 09:12:06.842378  348846 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1213 09:12:06.842618  348846 addons.go:239] Setting addon default-storageclass=true in "newest-cni-966117"
	W1213 09:12:06.842633  348846 addons.go:248] addon default-storageclass should already be in state true
	I1213 09:12:06.842661  348846 host.go:66] Checking if "newest-cni-966117" exists ...
	I1213 09:12:06.843100  348846 cli_runner.go:164] Run: docker container inspect newest-cni-966117 --format={{.State.Status}}
	I1213 09:12:06.845434  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1213 09:12:06.845458  348846 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1213 09:12:06.845538  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.848056  348846 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1213 09:12:02.872197  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:05.371940  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:06.852360  348846 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:06.852383  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 09:12:06.852438  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.883177  348846 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:06.883209  348846 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 09:12:06.883277  348846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-966117
	I1213 09:12:06.884226  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.897411  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.909108  348846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/newest-cni-966117/id_rsa Username:docker}
	I1213 09:12:06.978460  348846 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 09:12:06.993222  348846 api_server.go:52] waiting for apiserver process to appear ...
	I1213 09:12:06.993297  348846 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 09:12:06.997850  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1213 09:12:06.997871  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1213 09:12:07.007126  348846 api_server.go:72] duration metric: took 195.312265ms to wait for apiserver process to appear ...
	I1213 09:12:07.007154  348846 api_server.go:88] waiting for apiserver healthz status ...
	I1213 09:12:07.007175  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:07.011914  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 09:12:07.014555  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1213 09:12:07.014577  348846 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1213 09:12:07.019771  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 09:12:07.029222  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1213 09:12:07.029247  348846 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1213 09:12:07.046837  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1213 09:12:07.046861  348846 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1213 09:12:07.062365  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1213 09:12:07.062392  348846 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1213 09:12:07.078224  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1213 09:12:07.078262  348846 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1213 09:12:07.092750  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1213 09:12:07.092770  348846 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1213 09:12:07.110234  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1213 09:12:07.110262  348846 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1213 09:12:07.124880  348846 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.124902  348846 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1213 09:12:07.140572  348846 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1213 09:12:07.991252  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:07.991294  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:07.991314  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.004904  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.004937  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.008203  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.021272  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1213 09:12:08.021307  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1213 09:12:08.507813  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:08.512905  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:08.512932  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:08.545622  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.525818473s)
	I1213 09:12:08.545637  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.533695672s)
	I1213 09:12:08.545757  348846 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.40514658s)
	I1213 09:12:08.547413  348846 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-966117 addons enable metrics-server
	
	I1213 09:12:08.557103  348846 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1213 09:12:08.558451  348846 addons.go:530] duration metric: took 1.746499358s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1213 09:12:09.007392  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.011715  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1213 09:12:09.011744  348846 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1213 09:12:09.507347  348846 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1213 09:12:09.511631  348846 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1213 09:12:09.513046  348846 api_server.go:141] control plane version: v1.35.0-beta.0
	I1213 09:12:09.513079  348846 api_server.go:131] duration metric: took 2.505917364s to wait for apiserver health ...
	I1213 09:12:09.513097  348846 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 09:12:09.516862  348846 system_pods.go:59] 8 kube-system pods found
	I1213 09:12:09.516915  348846 system_pods.go:61] "coredns-7d764666f9-sk2nl" [37f2d8b3-7ed6-4e82-9143-7d913b7b5f77] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516936  348846 system_pods.go:61] "etcd-newest-cni-966117" [d5f60407-9ff1-41b0-8842-112a9d4e4db9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1213 09:12:09.516944  348846 system_pods.go:61] "kindnet-4ccdw" [e37a84fb-6bb4-46c9-abd8-7faff492b11f] Running
	I1213 09:12:09.516951  348846 system_pods.go:61] "kube-apiserver-newest-cni-966117" [ca4879bf-a328-40f8-bd80-067ce393ba2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1213 09:12:09.516956  348846 system_pods.go:61] "kube-controller-manager-newest-cni-966117" [384bdaff-8ec0-437d-b7b2-9186a3d77d5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1213 09:12:09.516961  348846 system_pods.go:61] "kube-proxy-lnm62" [38b74d8a-68b4-4816-bec2-fad7da0471f8] Running
	I1213 09:12:09.516966  348846 system_pods.go:61] "kube-scheduler-newest-cni-966117" [16be3154-0cd9-494f-bdbf-d41819d2c1fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1213 09:12:09.516975  348846 system_pods.go:61] "storage-provisioner" [31d3def0-8e7d-4759-a1b9-0fad99271611] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1213 09:12:09.516980  348846 system_pods.go:74] duration metric: took 3.876843ms to wait for pod list to return data ...
	I1213 09:12:09.516989  348846 default_sa.go:34] waiting for default service account to be created ...
	I1213 09:12:09.519603  348846 default_sa.go:45] found service account: "default"
	I1213 09:12:09.519627  348846 default_sa.go:55] duration metric: took 2.631674ms for default service account to be created ...
	I1213 09:12:09.519643  348846 kubeadm.go:587] duration metric: took 2.707831782s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1213 09:12:09.519662  348846 node_conditions.go:102] verifying NodePressure condition ...
	I1213 09:12:09.522032  348846 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1213 09:12:09.522053  348846 node_conditions.go:123] node cpu capacity is 8
	I1213 09:12:09.522067  348846 node_conditions.go:105] duration metric: took 2.401048ms to run NodePressure ...
	I1213 09:12:09.522078  348846 start.go:242] waiting for startup goroutines ...
	I1213 09:12:09.522084  348846 start.go:247] waiting for cluster config update ...
	I1213 09:12:09.522094  348846 start.go:256] writing updated cluster config ...
	I1213 09:12:09.522385  348846 ssh_runner.go:195] Run: rm -f paused
	I1213 09:12:09.569110  348846 start.go:625] kubectl: 1.34.3, cluster: 1.35.0-beta.0 (minor skew: 1)
	I1213 09:12:09.570864  348846 out.go:179] * Done! kubectl is now configured to use "newest-cni-966117" cluster and "default" namespace by default
	W1213 09:12:07.870810  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:09.873311  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:12.373184  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:14.871194  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:16.872364  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	W1213 09:12:19.370156  344087 pod_ready.go:104] pod "coredns-66bc5c9577-xhjmn" is not "Ready", error: <nil>
	I1213 09:12:20.370628  344087 pod_ready.go:94] pod "coredns-66bc5c9577-xhjmn" is "Ready"
	I1213 09:12:20.370655  344087 pod_ready.go:86] duration metric: took 33.00509608s for pod "coredns-66bc5c9577-xhjmn" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.372949  344087 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.376301  344087 pod_ready.go:94] pod "etcd-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.376318  344087 pod_ready.go:86] duration metric: took 3.345709ms for pod "etcd-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.378179  344087 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.394829  344087 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.394858  344087 pod_ready.go:86] duration metric: took 16.650618ms for pod "kube-apiserver-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.398454  344087 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.569256  344087 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:20.569282  344087 pod_ready.go:86] duration metric: took 170.8099ms for pod "kube-controller-manager-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:20.769393  344087 pod_ready.go:83] waiting for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.169597  344087 pod_ready.go:94] pod "kube-proxy-78nr2" is "Ready"
	I1213 09:12:21.169627  344087 pod_ready.go:86] duration metric: took 400.213054ms for pod "kube-proxy-78nr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.368623  344087 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.768635  344087 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-361270" is "Ready"
	I1213 09:12:21.768661  344087 pod_ready.go:86] duration metric: took 400.016263ms for pod "kube-scheduler-default-k8s-diff-port-361270" in "kube-system" namespace to be "Ready" or be gone ...
	I1213 09:12:21.768672  344087 pod_ready.go:40] duration metric: took 34.406964078s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1213 09:12:21.813431  344087 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1213 09:12:21.815222  344087 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-361270" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.472115031Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.47526199Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 13 09:11:57 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:11:57.475280853Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.624279699Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5774705f-b188-4dd3-9b7e-1c741e5b3bc5 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.627605938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7231714b-fc55-4602-a3ef-2e0e9ff3160d name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.63087266Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=8bb3c4bb-11c2-415d-aad8-13be2a74a992 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.631021188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.638347613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.638822518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.668449806Z" level=info msg="Created container 6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=8bb3c4bb-11c2-415d-aad8-13be2a74a992 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.669141406Z" level=info msg="Starting container: 6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69" id=eb6a1f09-88b3-493b-991a-c604a568f8f5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.671430473Z" level=info msg="Started container" PID=1765 containerID=6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper id=eb6a1f09-88b3-493b-991a-c604a568f8f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e56a5b9e6025b719bf5ea35f1852469d5adf047dbd51ce43ebab0af9fb1471ff
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.728986504Z" level=info msg="Removing container: b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317" id=e31b21da-3f24-4f91-8797-6bfe75226f14 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:12:09 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:09.739169941Z" level=info msg="Removed container b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9/dashboard-metrics-scraper" id=e31b21da-3f24-4f91-8797-6bfe75226f14 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.750315015Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ffec7155-5ab4-4e30-b04b-3f3f262c0ef0 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.751304399Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=de1ac95e-d83e-48b6-a211-6229860f3b1e name=/runtime.v1.ImageService/ImageStatus
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.752443538Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0eaa9e49-1a12-473c-af8d-87f40e1a5597 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.752640063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.756961348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757156826Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1b1299fa65af51b4da5f0dd1b99fdb55877f8662a95c67cba6893f235488d069/merged/etc/passwd: no such file or directory"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757191674Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1b1299fa65af51b4da5f0dd1b99fdb55877f8662a95c67cba6893f235488d069/merged/etc/group: no such file or directory"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.757558612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.789707886Z" level=info msg="Created container 3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e: kube-system/storage-provisioner/storage-provisioner" id=0eaa9e49-1a12-473c-af8d-87f40e1a5597 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.790339073Z" level=info msg="Starting container: 3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e" id=c61958d6-d371-4c5e-a079-90afab7007e8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 09:12:17 default-k8s-diff-port-361270 crio[567]: time="2025-12-13T09:12:17.792080448Z" level=info msg="Started container" PID=1779 containerID=3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e description=kube-system/storage-provisioner/storage-provisioner id=c61958d6-d371-4c5e-a079-90afab7007e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2891790bb3f3da9d453bca277f59ea9144a95900f01103983e500236b84f1c01
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3a6ccc8828213       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   2891790bb3f3d       storage-provisioner                                    kube-system
	6b22b815fcc62       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   e56a5b9e6025b       dashboard-metrics-scraper-6ffb444bf9-zbfg9             kubernetes-dashboard
	5453ca1cc46c2       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   96a9bccb5c573       kubernetes-dashboard-855c9754f9-ww2pb                  kubernetes-dashboard
	f395b3df7c1cc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   feae305ccac4c       coredns-66bc5c9577-xhjmn                               kube-system
	731745ea7066b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   f31f7a886109f       busybox                                                default
	147013bf941aa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   4aa49ddf9df13       kindnet-g6h8g                                          kube-system
	644b03e3b5bd5       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                           51 seconds ago      Running             kube-proxy                  0                   fae728b6ea51f       kube-proxy-78nr2                                       kube-system
	8f8bf95e6ad87       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   2891790bb3f3d       storage-provisioner                                    kube-system
	d2c1b6b0bb4e9       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                           54 seconds ago      Running             kube-controller-manager     0                   d2fea5f8d5902       kube-controller-manager-default-k8s-diff-port-361270   kube-system
	12825df66baea       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                           54 seconds ago      Running             kube-apiserver              0                   b2cc705fa3482       kube-apiserver-default-k8s-diff-port-361270            kube-system
	173e64f97cc32       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                           54 seconds ago      Running             kube-scheduler              0                   a0cd64e66f35d       kube-scheduler-default-k8s-diff-port-361270            kube-system
	1fa5b689652f2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   a4d3ac7b061d3       etcd-default-k8s-diff-port-361270                      kube-system
	
	
	==> coredns [f395b3df7c1ccb9efcadb608e29c264a29c3bd8dd9965a2a84d56baa7a9d46c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41381 - 15283 "HINFO IN 6362231300460410859.867155602260387096. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.437041004s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-361270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-361270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fb16b7642350f383695d44d1e88d7327f6f14453
	                    minikube.k8s.io/name=default-k8s-diff-port-361270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_13T09_10_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 13 Dec 2025 09:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-361270
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 13 Dec 2025 09:12:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 13 Dec 2025 09:12:26 +0000   Sat, 13 Dec 2025 09:11:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-361270
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 20812206ba1bc740098dbd916937f7d4
	  System UUID:                2dd9bf5a-9012-41ec-b7a7-58f5e5034374
	  Boot ID:                    29c80eb4-eb9c-4bb0-b6b3-20b2fdb42935
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-xhjmn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-361270                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-g6h8g                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-361270             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-361270    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-78nr2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-361270             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zbfg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ww2pb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node default-k8s-diff-port-361270 event: Registered Node default-k8s-diff-port-361270 in Controller
	  Normal  NodeReady                92s                kubelet          Node default-k8s-diff-port-361270 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-361270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-361270 event: Registered Node default-k8s-diff-port-361270 in Controller
	
	
	==> dmesg <==
	[  +0.000027] ll header: 00000000: 46 cd d5 5c 25 ec 42 c9 5c 10 6d 67 08 00
	[Dec13 09:07] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[Dec13 09:08] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 92 30 31 19 d5 08 06
	[  +0.000699] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 37 88 97 25 dd 08 06
	[ +15.632521] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[ +20.216508] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 f7 62 38 81 f0 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 5c 2f b6 75 06 08 06
	[  +0.111008] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 c8 f7 8d 1a 5f 08 06
	[  +0.000405] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 0c 1f 9f 62 2e 08 06
	[ +12.586108] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de cc 9f d9 2d 23 08 06
	[  +0.043792] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	[Dec13 09:09] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 62 87 e9 6b d3 37 08 06
	[  +0.000424] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ce 51 ce 49 b8 70 08 06
	
	
	==> etcd [1fa5b689652f2df6d1cdd70f81cf2ca28db6a2f1cdc1b09638a4e2aac8c69c47] <==
	{"level":"warn","ts":"2025-12-13T09:11:45.104567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.114524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.121897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.129514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.137718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.145428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.154894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.162975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.171784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.184722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.192945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.201235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.209616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.216944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.224475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.232153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.238700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.245707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.252199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.259623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.267012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.287853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.295614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.302153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-13T09:11:45.350994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39304","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:12:38 up 55 min,  0 user,  load average: 1.84, 3.01, 2.28
	Linux default-k8s-diff-port-361270 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [147013bf941aad42127bb9c3fc06f64f6dcdb530987d16e48af23ce8e5c42fd6] <==
	I1213 09:11:47.260810       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1213 09:11:47.261049       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1213 09:11:47.261227       1 main.go:148] setting mtu 1500 for CNI 
	I1213 09:11:47.261244       1 main.go:178] kindnetd IP family: "ipv4"
	I1213 09:11:47.261270       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-13T09:11:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1213 09:11:47.460018       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1213 09:11:47.460049       1 controller.go:381] "Waiting for informer caches to sync"
	I1213 09:11:47.460059       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1213 09:11:47.460201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1213 09:11:47.960706       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1213 09:11:47.960815       1 metrics.go:72] Registering metrics
	I1213 09:11:47.960905       1 controller.go:711] "Syncing nftables rules"
	I1213 09:11:57.460533       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:11:57.460615       1 main.go:301] handling current node
	I1213 09:12:07.459629       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:07.459665       1 main.go:301] handling current node
	I1213 09:12:17.459580       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:17.459615       1 main.go:301] handling current node
	I1213 09:12:27.459545       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:27.459628       1 main.go:301] handling current node
	I1213 09:12:37.468576       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1213 09:12:37.468612       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12825df66baeab8e929d1992ff9bc015a6642f6e42c0188514ffa0a437bc96b6] <==
	I1213 09:11:45.824911       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1213 09:11:45.824923       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1213 09:11:45.824934       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1213 09:11:45.824946       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1213 09:11:45.824963       1 aggregator.go:171] initial CRD sync complete...
	I1213 09:11:45.824972       1 autoregister_controller.go:144] Starting autoregister controller
	I1213 09:11:45.824977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1213 09:11:45.824911       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1213 09:11:45.824983       1 cache.go:39] Caches are synced for autoregister controller
	I1213 09:11:45.825035       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1213 09:11:45.825019       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1213 09:11:45.826812       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1213 09:11:45.831834       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1213 09:11:45.852822       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1213 09:11:46.110907       1 controller.go:667] quota admission added evaluator for: namespaces
	I1213 09:11:46.141163       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1213 09:11:46.183035       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1213 09:11:46.190335       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1213 09:11:46.198398       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1213 09:11:46.229833       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.113.46"}
	I1213 09:11:46.238299       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.103.84"}
	I1213 09:11:46.734460       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1213 09:11:49.511598       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1213 09:11:49.559981       1 controller.go:667] quota admission added evaluator for: endpoints
	I1213 09:11:49.660902       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d2c1b6b0bb4e9a0a4e33bae972a4b5976a7891a6b479c3ae241164f8934c8e1c] <==
	I1213 09:11:49.157288       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1213 09:11:49.157308       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1213 09:11:49.157375       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1213 09:11:49.157413       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1213 09:11:49.157608       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1213 09:11:49.157611       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1213 09:11:49.157989       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1213 09:11:49.157991       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1213 09:11:49.158290       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1213 09:11:49.162057       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:49.162083       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1213 09:11:49.162075       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1213 09:11:49.164290       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1213 09:11:49.164327       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1213 09:11:49.164338       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1213 09:11:49.164367       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1213 09:11:49.164377       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1213 09:11:49.164384       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1213 09:11:49.165542       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1213 09:11:49.168890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1213 09:11:49.173164       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1213 09:11:49.174367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1213 09:11:49.175453       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1213 09:11:49.181692       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1213 09:11:49.184966       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [644b03e3b5bd5141175cd1b667e7768c8d77c84ab6933e03e2d69cd7805a7e95] <==
	I1213 09:11:47.045164       1 server_linux.go:53] "Using iptables proxy"
	I1213 09:11:47.122331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1213 09:11:47.222437       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1213 09:11:47.222472       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1213 09:11:47.222588       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 09:11:47.240362       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 09:11:47.240403       1 server_linux.go:132] "Using iptables Proxier"
	I1213 09:11:47.245155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 09:11:47.245520       1 server.go:527] "Version info" version="v1.34.2"
	I1213 09:11:47.245556       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:47.246725       1 config.go:200] "Starting service config controller"
	I1213 09:11:47.246755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1213 09:11:47.246780       1 config.go:106] "Starting endpoint slice config controller"
	I1213 09:11:47.246799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1213 09:11:47.246890       1 config.go:309] "Starting node config controller"
	I1213 09:11:47.246904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1213 09:11:47.247014       1 config.go:403] "Starting serviceCIDR config controller"
	I1213 09:11:47.247042       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1213 09:11:47.347687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1213 09:11:47.347725       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1213 09:11:47.347734       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1213 09:11:47.347800       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [173e64f97cc32e0b4a6c94b6c29bf08fb8f903ffe154756eed2c3b98e5f27ab8] <==
	I1213 09:11:44.615248       1 serving.go:386] Generated self-signed cert in-memory
	W1213 09:11:45.740793       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1213 09:11:45.741265       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1213 09:11:45.741294       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1213 09:11:45.741394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1213 09:11:45.780244       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1213 09:11:45.780344       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 09:11:45.783541       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:45.784002       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1213 09:11:45.783782       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1213 09:11:45.783760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1213 09:11:45.884608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 13 09:11:49 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:49.878694     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7rr\" (UniqueName: \"kubernetes.io/projected/237a7343-83ad-4a5f-9093-de528e47ff9f-kube-api-access-lf7rr\") pod \"kubernetes-dashboard-855c9754f9-ww2pb\" (UID: \"237a7343-83ad-4a5f-9093-de528e47ff9f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ww2pb"
	Dec 13 09:11:50 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:50.205149     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 13 09:11:52 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:52.675806     733 scope.go:117] "RemoveContainer" containerID="09d58cc242da9f787421582038fcd52bfad306c1dbe31718caeb7d93929c1564"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:53.680561     733 scope.go:117] "RemoveContainer" containerID="09d58cc242da9f787421582038fcd52bfad306c1dbe31718caeb7d93929c1564"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:53.680680     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:53 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:53.680895     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:11:54 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:54.683756     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:54 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:54.683963     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:55.699096     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ww2pb" podStartSLOduration=1.620764124 podStartE2EDuration="6.699066383s" podCreationTimestamp="2025-12-13 09:11:49 +0000 UTC" firstStartedPulling="2025-12-13 09:11:50.107480169 +0000 UTC m=+6.581835028" lastFinishedPulling="2025-12-13 09:11:55.185782417 +0000 UTC m=+11.660137287" observedRunningTime="2025-12-13 09:11:55.698667443 +0000 UTC m=+12.173022322" watchObservedRunningTime="2025-12-13 09:11:55.699066383 +0000 UTC m=+12.173421261"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: I1213 09:11:55.942744     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:11:55 default-k8s-diff-port-361270 kubelet[733]: E1213 09:11:55.942915     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.623746     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.727608     733 scope.go:117] "RemoveContainer" containerID="b05849820837388400707505c12256efe4ffd285d208950577e844a10a2d2317"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:09.727885     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:09 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:09.728100     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:15 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:15.943073     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:15 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:15.943266     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:17 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:17.749922     733 scope.go:117] "RemoveContainer" containerID="8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec"
	Dec 13 09:12:27 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:27.623983     733 scope.go:117] "RemoveContainer" containerID="6b22b815fcc62c2eeec5b1f94f879887f46d8ba6f261b607594c09045b3f8e69"
	Dec 13 09:12:27 default-k8s-diff-port-361270 kubelet[733]: E1213 09:12:27.624181     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zbfg9_kubernetes-dashboard(7109d2a7-f380-4510-a00d-3c9ead5275cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zbfg9" podUID="7109d2a7-f380-4510-a00d-3c9ead5275cc"
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 13 09:12:33 default-k8s-diff-port-361270 kubelet[733]: I1213 09:12:33.848146     733 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 13 09:12:33 default-k8s-diff-port-361270 systemd[1]: kubelet.service: Consumed 1.673s CPU time.
	
	
	==> kubernetes-dashboard [5453ca1cc46c20adc190b658c4c3524bf8d7f1cb6177172bb2f6ee3054a7dfb7] <==
	2025/12/13 09:11:55 Starting overwatch
	2025/12/13 09:11:55 Using namespace: kubernetes-dashboard
	2025/12/13 09:11:55 Using in-cluster config to connect to apiserver
	2025/12/13 09:11:55 Using secret token for csrf signing
	2025/12/13 09:11:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/13 09:11:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/13 09:11:55 Successful initial request to the apiserver, version: v1.34.2
	2025/12/13 09:11:55 Generating JWE encryption key
	2025/12/13 09:11:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/13 09:11:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/13 09:11:55 Initializing JWE encryption key from synchronized object
	2025/12/13 09:11:55 Creating in-cluster Sidecar client
	2025/12/13 09:11:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/13 09:11:55 Serving insecurely on HTTP port: 9090
	2025/12/13 09:12:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3a6ccc8828213f06a483a07664d5f51e72982e311edfd72fc6bcd8fbd8700f7e] <==
	I1213 09:12:17.805042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 09:12:17.813217       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 09:12:17.813274       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1213 09:12:17.815614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:21.270871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:25.531649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:29.130012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:32.184647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.206657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.210978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:35.211107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 09:12:35.211199       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"008c8b69-db9b-496b-ba4e-78cdc6236358", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a became leader
	I1213 09:12:35.211278       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a!
	W1213 09:12:35.213059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:35.216894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1213 09:12:35.311604       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361270_648cf29a-7593-4004-8d99-1e69c0f33e8a!
	W1213 09:12:37.219869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1213 09:12:37.223804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [8f8bf95e6ad87c53d72320ca46c7c701c44a0050543308bd67d34281350550ec] <==
	I1213 09:11:46.996984       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1213 09:12:17.000917       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270: exit status 2 (319.162389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.81s)

                                                
                                    

Test pass (354/415)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.78
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 3.49
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.15
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.82
31 TestOffline 60.85
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 124.71
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.42
57 TestAddons/StoppedEnableDisable 16.63
58 TestCertOptions 29.58
59 TestCertExpiration 215.39
61 TestForceSystemdFlag 43.11
62 TestForceSystemdEnv 30.53
67 TestErrorSpam/setup 22.59
68 TestErrorSpam/start 0.64
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 5.86
71 TestErrorSpam/unpause 5.75
72 TestErrorSpam/stop 12.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 41.7
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.15
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.14
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.44
84 TestFunctional/serial/CacheCmd/cache/add_local 0.9
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 42.57
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.17
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 4.6
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 5.83
100 TestFunctional/parallel/DryRun 0.46
101 TestFunctional/parallel/InternationalLanguage 0.18
102 TestFunctional/parallel/StatusCmd 0.98
106 TestFunctional/parallel/ServiceCmdConnect 7.67
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 25.3
110 TestFunctional/parallel/SSHCmd 0.58
111 TestFunctional/parallel/CpCmd 1.78
112 TestFunctional/parallel/MySQL 21.54
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 1.82
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
122 TestFunctional/parallel/License 0.23
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.54
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
128 TestFunctional/parallel/ImageCommands/ImageListYaml 1.34
129 TestFunctional/parallel/ImageCommands/ImageBuild 2.98
130 TestFunctional/parallel/ImageCommands/Setup 0.4
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.28
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.3
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
143 TestFunctional/parallel/ImageCommands/ImageRemove 2.35
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.89
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.11
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/ServiceCmd/DeployApp 6.15
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
154 TestFunctional/parallel/ProfileCmd/profile_list 0.39
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
156 TestFunctional/parallel/MountCmd/any-port 7.02
157 TestFunctional/parallel/ServiceCmd/List 1.75
158 TestFunctional/parallel/ServiceCmd/JSONOutput 1.8
159 TestFunctional/parallel/MountCmd/specific-port 2.1
160 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
161 TestFunctional/parallel/ServiceCmd/Format 0.53
162 TestFunctional/parallel/ServiceCmd/URL 0.56
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 36.48
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 6.05
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.47
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.82
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.51
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 58.05
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.24
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 11.67
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 6.91
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.39
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.95
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 8.74
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 21.48
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.79
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.09
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 21.95
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.33
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.03
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.66
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.27
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.19
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.27
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 1.23
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.28
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.34
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.17
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.59
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 12.26
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 1.11
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.54
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.71
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.4
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.86
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.99
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.61
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 8.14
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.46
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.47
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.5
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.51
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.7
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.7
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.54
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.62
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.55
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 145.51
266 TestMultiControlPlane/serial/DeployApp 5.84
267 TestMultiControlPlane/serial/PingHostFromPods 1.03
268 TestMultiControlPlane/serial/AddWorkerNode 26.97
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
271 TestMultiControlPlane/serial/CopyFile 16.97
272 TestMultiControlPlane/serial/StopSecondaryNode 19.75
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.55
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.35
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.54
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
279 TestMultiControlPlane/serial/StopCluster 48.52
280 TestMultiControlPlane/serial/RestartCluster 57.16
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
282 TestMultiControlPlane/serial/AddSecondaryNode 85.42
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
288 TestJSONOutput/start/Command 38.18
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 7.99
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 25.93
314 TestKicCustomNetwork/use_default_bridge_network 23.19
315 TestKicExistingNetwork 25.43
316 TestKicCustomSubnet 25.83
317 TestKicStaticIP 23.16
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 44.02
322 TestMountStart/serial/StartWithMountFirst 7.95
323 TestMountStart/serial/VerifyMountFirst 0.27
324 TestMountStart/serial/StartWithMountSecond 4.7
325 TestMountStart/serial/VerifyMountSecond 0.27
326 TestMountStart/serial/DeleteFirst 1.67
327 TestMountStart/serial/VerifyMountPostDelete 0.26
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.12
330 TestMountStart/serial/VerifyMountPostStop 0.26
333 TestMultiNode/serial/FreshStart2Nodes 92.69
334 TestMultiNode/serial/DeployApp2Nodes 3.87
335 TestMultiNode/serial/PingHostFrom2Pods 0.72
336 TestMultiNode/serial/AddNode 22.46
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.65
339 TestMultiNode/serial/CopyFile 9.76
340 TestMultiNode/serial/StopNode 2.26
341 TestMultiNode/serial/StartAfterStop 7.3
342 TestMultiNode/serial/RestartKeepsNodes 81.65
343 TestMultiNode/serial/DeleteNode 5.22
344 TestMultiNode/serial/StopMultiNode 30.39
345 TestMultiNode/serial/RestartMultiNode 51.77
346 TestMultiNode/serial/ValidateNameConflict 25.69
351 TestPreload 96.95
353 TestScheduledStopUnix 98.4
356 TestInsufficientStorage 11.75
357 TestRunningBinaryUpgrade 52
359 TestKubernetesUpgrade 390.45
360 TestMissingContainerUpgrade 90.9
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
363 TestNoKubernetes/serial/StartWithK8s 34.34
364 TestNoKubernetes/serial/StartWithStopK8s 16.63
372 TestNetworkPlugins/group/false 3.57
376 TestNoKubernetes/serial/Start 7.96
377 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
378 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
379 TestNoKubernetes/serial/ProfileList 3.13
380 TestNoKubernetes/serial/Stop 2.69
381 TestNoKubernetes/serial/StartNoArgs 6.94
382 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
383 TestStoppedBinaryUpgrade/Setup 0.57
384 TestStoppedBinaryUpgrade/Upgrade 40.27
393 TestPause/serial/Start 71.55
394 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
395 TestNetworkPlugins/group/auto/Start 41.63
396 TestNetworkPlugins/group/auto/KubeletFlags 0.29
397 TestNetworkPlugins/group/auto/NetCatPod 8.24
398 TestNetworkPlugins/group/auto/DNS 0.1
399 TestNetworkPlugins/group/auto/Localhost 0.08
400 TestNetworkPlugins/group/auto/HairPin 0.08
401 TestPause/serial/SecondStartNoReconfiguration 6.18
403 TestNetworkPlugins/group/kindnet/Start 41.7
404 TestNetworkPlugins/group/calico/Start 46.34
405 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
406 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
407 TestNetworkPlugins/group/kindnet/NetCatPod 9.17
408 TestNetworkPlugins/group/calico/ControllerPod 6.01
409 TestNetworkPlugins/group/kindnet/DNS 0.11
410 TestNetworkPlugins/group/kindnet/Localhost 0.09
411 TestNetworkPlugins/group/kindnet/HairPin 0.09
412 TestNetworkPlugins/group/calico/KubeletFlags 0.31
413 TestNetworkPlugins/group/custom-flannel/Start 51.6
414 TestNetworkPlugins/group/calico/NetCatPod 9.25
415 TestNetworkPlugins/group/calico/DNS 0.13
416 TestNetworkPlugins/group/calico/Localhost 0.12
417 TestNetworkPlugins/group/calico/HairPin 0.11
418 TestNetworkPlugins/group/enable-default-cni/Start 66.92
419 TestNetworkPlugins/group/flannel/Start 49.22
420 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
421 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
422 TestNetworkPlugins/group/custom-flannel/DNS 0.15
423 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
424 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
425 TestNetworkPlugins/group/bridge/Start 62.08
426 TestNetworkPlugins/group/flannel/ControllerPod 6
427 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
428 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
429 TestNetworkPlugins/group/flannel/NetCatPod 9.18
430 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
431 TestNetworkPlugins/group/flannel/DNS 0.11
432 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
433 TestNetworkPlugins/group/flannel/Localhost 0.1
434 TestNetworkPlugins/group/flannel/HairPin 0.1
435 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
436 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
438 TestStartStop/group/old-k8s-version/serial/FirstStart 49.97
440 TestStartStop/group/no-preload/serial/FirstStart 46.96
441 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
442 TestNetworkPlugins/group/bridge/NetCatPod 8.21
443 TestNetworkPlugins/group/bridge/DNS 0.14
444 TestNetworkPlugins/group/bridge/Localhost 0.12
445 TestNetworkPlugins/group/bridge/HairPin 0.12
446 TestStartStop/group/no-preload/serial/DeployApp 8.39
447 TestStartStop/group/old-k8s-version/serial/DeployApp 8.3
449 TestStartStop/group/embed-certs/serial/FirstStart 37.32
452 TestStartStop/group/no-preload/serial/Stop 16.34
453 TestStartStop/group/old-k8s-version/serial/Stop 16.08
454 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
455 TestStartStop/group/no-preload/serial/SecondStart 49.78
456 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
457 TestStartStop/group/old-k8s-version/serial/SecondStart 52.37
459 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.18
460 TestStartStop/group/embed-certs/serial/DeployApp 10.3
462 TestStartStop/group/embed-certs/serial/Stop 18.17
463 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
464 TestStartStop/group/embed-certs/serial/SecondStart 45.26
465 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
466 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
467 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
468 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
469 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
470 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
473 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
474 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.38
477 TestStartStop/group/newest-cni/serial/FirstStart 22.03
478 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
479 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.12
480 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
481 TestStartStop/group/newest-cni/serial/DeployApp 0
483 TestStartStop/group/newest-cni/serial/Stop 8.48
484 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
485 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
487 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
488 TestStartStop/group/newest-cni/serial/SecondStart 10.48
489 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
490 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
491 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
493 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
494 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.28.0/json-events (4.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.776593583s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.78s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1213 08:28:48.501298    9303 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1213 08:28:48.501389    9303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-202898
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-202898: exit status 85 (73.016567ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-202898 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:43.778664    9314 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:43.778861    9314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:43.778869    9314 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:43.778874    9314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:43.779065    9314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	W1213 08:28:43.779176    9314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22128-5776/.minikube/config/config.json: open /home/jenkins/minikube-integration/22128-5776/.minikube/config/config.json: no such file or directory
	I1213 08:28:43.779654    9314 out.go:368] Setting JSON to true
	I1213 08:28:43.780523    9314 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":676,"bootTime":1765613848,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:43.780581    9314 start.go:143] virtualization: kvm guest
	I1213 08:28:43.784591    9314 out.go:99] [download-only-202898] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:43.784715    9314 notify.go:221] Checking for updates...
	W1213 08:28:43.784723    9314 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 08:28:43.786119    9314 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:43.787527    9314 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:43.788858    9314 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:28:43.790180    9314 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:28:43.791412    9314 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:28:43.793770    9314 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:28:43.794060    9314 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:43.821099    9314 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:28:43.821182    9314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:44.049322    9314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 08:28:44.039299725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:44.049442    9314 docker.go:319] overlay module found
	I1213 08:28:44.051065    9314 out.go:99] Using the docker driver based on user configuration
	I1213 08:28:44.051093    9314 start.go:309] selected driver: docker
	I1213 08:28:44.051099    9314 start.go:927] validating driver "docker" against <nil>
	I1213 08:28:44.051181    9314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:44.105287    9314 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-13 08:28:44.09609604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:44.105444    9314 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:44.105996    9314 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 08:28:44.106153    9314 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:28:44.107768    9314 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-202898 host does not exist
	  To start a cluster, run: "minikube start -p download-only-202898"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-202898
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (3.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-109226 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-109226 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.48782844s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (3.49s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1213 08:28:52.438464    9303 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1213 08:28:52.438518    9303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-109226
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-109226: exit status 85 (74.635103ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-202898 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-202898                                                                                                                                                   │ download-only-202898 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-109226 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-109226 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:49
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:49.000556    9688 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:49.000806    9688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:49.000817    9688 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:49.000823    9688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:49.001042    9688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:28:49.001502    9688 out.go:368] Setting JSON to true
	I1213 08:28:49.002573    9688 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":681,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:49.002678    9688 start.go:143] virtualization: kvm guest
	I1213 08:28:49.004507    9688 out.go:99] [download-only-109226] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:49.004640    9688 notify.go:221] Checking for updates...
	I1213 08:28:49.005950    9688 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:49.007270    9688 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:49.008724    9688 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:28:49.010211    9688 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:28:49.011673    9688 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:28:49.014379    9688 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:28:49.014607    9688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:49.038128    9688 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:28:49.038235    9688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:49.094877    9688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-13 08:28:49.085478894 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:49.095023    9688 docker.go:319] overlay module found
	I1213 08:28:49.096622    9688 out.go:99] Using the docker driver based on user configuration
	I1213 08:28:49.096657    9688 start.go:309] selected driver: docker
	I1213 08:28:49.096683    9688 start.go:927] validating driver "docker" against <nil>
	I1213 08:28:49.096769    9688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:49.152954    9688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-13 08:28:49.143552015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:49.153171    9688 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:49.153695    9688 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 08:28:49.153853    9688 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:28:49.155575    9688 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-109226 host does not exist
	  To start a cluster, run: "minikube start -p download-only-109226"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-109226
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-124765 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-124765 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.153800707s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1213 08:28:56.031340    9303 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1213 08:28:56.031382    9303 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-124765
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-124765: exit status 85 (75.836944ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-202898 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-202898 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-202898                                                                                                                                                          │ download-only-202898 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-109226 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-109226 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-109226                                                                                                                                                          │ download-only-109226 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │ 13 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-124765 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-124765 │ jenkins │ v1.37.0 │ 13 Dec 25 08:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/13 08:28:52
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 08:28:52.927543   10029 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:28:52.927801   10029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:52.927811   10029 out.go:374] Setting ErrFile to fd 2...
	I1213 08:28:52.927815   10029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:28:52.928055   10029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:28:52.928554   10029 out.go:368] Setting JSON to true
	I1213 08:28:52.929379   10029 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":685,"bootTime":1765613848,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:28:52.929428   10029 start.go:143] virtualization: kvm guest
	I1213 08:28:52.931075   10029 out.go:99] [download-only-124765] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:28:52.931213   10029 notify.go:221] Checking for updates...
	I1213 08:28:52.932447   10029 out.go:171] MINIKUBE_LOCATION=22128
	I1213 08:28:52.933816   10029 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:28:52.935009   10029 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:28:52.936420   10029 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:28:52.937698   10029 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1213 08:28:52.939873   10029 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 08:28:52.940173   10029 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:28:52.963535   10029 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:28:52.963648   10029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:53.018839   10029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-13 08:28:53.008124742 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:53.018955   10029 docker.go:319] overlay module found
	I1213 08:28:53.020512   10029 out.go:99] Using the docker driver based on user configuration
	I1213 08:28:53.020548   10029 start.go:309] selected driver: docker
	I1213 08:28:53.020554   10029 start.go:927] validating driver "docker" against <nil>
	I1213 08:28:53.020626   10029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:28:53.077102   10029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-13 08:28:53.068243017 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:28:53.077245   10029 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1213 08:28:53.077730   10029 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1213 08:28:53.077897   10029 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 08:28:53.079480   10029 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-124765 host does not exist
	  To start a cluster, run: "minikube start -p download-only-124765"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-124765
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-239724 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-239724" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-239724
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I1213 08:28:57.301527    9303 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-949734 --alsologtostderr --binary-mirror http://127.0.0.1:46283 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-949734" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-949734
--- PASS: TestBinaryMirror (0.82s)

                                                
                                    
TestOffline (60.85s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-403965 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-403965 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (57.157938081s)
helpers_test.go:176: Cleaning up "offline-crio-403965" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-403965
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-403965: (3.692361367s)
--- PASS: TestOffline (60.85s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-916029
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-916029: exit status 85 (66.135261ms)

                                                
                                                
-- stdout --
	* Profile "addons-916029" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916029"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-916029
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-916029: exit status 85 (62.263977ms)

                                                
                                                
-- stdout --
	* Profile "addons-916029" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916029"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (124.71s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-916029 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-916029 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.712440083s)
--- PASS: TestAddons/Setup (124.71s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-916029 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-916029 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-916029 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-916029 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [19e93556-7441-4a02-80d8-8b2015579721] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [19e93556-7441-4a02-80d8-8b2015579721] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003705404s
addons_test.go:696: (dbg) Run:  kubectl --context addons-916029 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-916029 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-916029 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.63s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-916029
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-916029: (16.347425429s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-916029
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-916029
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-916029
--- PASS: TestAddons/StoppedEnableDisable (16.63s)

                                                
                                    
TestCertOptions (29.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-636635 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-636635 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.391668055s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-636635 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-636635 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-636635 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-636635" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-636635
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-636635: (2.502280085s)
--- PASS: TestCertOptions (29.58s)

                                                
                                    
TestCertExpiration (215.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-031891 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-031891 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.7285086s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-031891 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-031891 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.189286024s)
helpers_test.go:176: Cleaning up "cert-expiration-031891" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-031891
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-031891: (2.47545388s)
--- PASS: TestCertExpiration (215.39s)

                                                
                                    
TestForceSystemdFlag (43.11s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-482530 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-482530 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.322511008s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-482530 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-482530" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-482530
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-482530: (2.497848914s)
--- PASS: TestForceSystemdFlag (43.11s)

                                                
                                    
TestForceSystemdEnv (30.53s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-135689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-135689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.781973649s)
helpers_test.go:176: Cleaning up "force-systemd-env-135689" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-135689
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-135689: (2.750247141s)
--- PASS: TestForceSystemdEnv (30.53s)

                                                
                                    
TestErrorSpam/setup (22.59s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-704024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-704024 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-704024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-704024 --driver=docker  --container-runtime=crio: (22.591079946s)
--- PASS: TestErrorSpam/setup (22.59s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (5.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause: exit status 80 (2.355451257s)

                                                
                                                
-- stdout --
	* Pausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause: exit status 80 (1.481760669s)

                                                
                                                
-- stdout --
	* Pausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause: exit status 80 (2.026496041s)

                                                
                                                
-- stdout --
	* Pausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.86s)

TestErrorSpam/unpause (5.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause: exit status 80 (1.718556723s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause: exit status 80 (1.74216864s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause: exit status 80 (2.292307449s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-704024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-13T08:34:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.75s)

TestErrorSpam/stop (12.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 stop: (12.314814995s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704024 --log_dir /tmp/nospam-704024 stop
--- PASS: TestErrorSpam/stop (12.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/test/nested/copy/9303/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-413795 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.702542197s)
--- PASS: TestFunctional/serial/StartWithProxy (41.70s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.15s)

=== RUN   TestFunctional/serial/SoftStart
I1213 08:35:49.103905    9303 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-413795 --alsologtostderr -v=8: (6.1520459s)
functional_test.go:678: soft start took 6.153144417s for "functional-413795" cluster.
I1213 08:35:55.256782    9303 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.15s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.14s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-413795 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

TestFunctional/serial/CacheCmd/cache/add_local (0.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-413795 /tmp/TestFunctionalserialCacheCmdcacheadd_local1432138685/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache add minikube-local-cache-test:functional-413795
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache delete minikube-local-cache-test:functional-413795
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-413795
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.90s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.987766ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 kubectl -- --context functional-413795 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-413795 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:36:03.549329    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.555805    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.567167    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.588620    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.630034    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.711497    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:03.873740    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:04.195144    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:04.836693    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:06.118291    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:08.679990    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:13.801707    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:36:24.043220    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-413795 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.567520689s)
functional_test.go:776: restart took 42.567649716s for "functional-413795" cluster.
I1213 08:36:43.643009    9303 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (42.57s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-413795 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 logs
E1213 08:36:44.525473    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 logs: (1.174101867s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.2s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 logs --file /tmp/TestFunctionalserialLogsFileCmd2566015849/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 logs --file /tmp/TestFunctionalserialLogsFileCmd2566015849/001/logs.txt: (1.195352818s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (4.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-413795 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-413795
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-413795: exit status 115 (340.134971ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30382 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-413795 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-413795 delete -f testdata/invalidsvc.yaml: (1.095880428s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 config get cpus: exit status 14 (66.026414ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 config get cpus: exit status 14 (70.543172ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (5.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-413795 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-413795 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 47350: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.83s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-413795 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.164147ms)

                                                
                                                
-- stdout --
	* [functional-413795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:37:16.238618   47628 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:37:16.238872   47628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:37:16.238884   47628 out.go:374] Setting ErrFile to fd 2...
	I1213 08:37:16.238891   47628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:37:16.239347   47628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:37:16.239959   47628 out.go:368] Setting JSON to false
	I1213 08:37:16.241259   47628 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1188,"bootTime":1765613848,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:37:16.241332   47628 start.go:143] virtualization: kvm guest
	I1213 08:37:16.243557   47628 out.go:179] * [functional-413795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:37:16.245047   47628 notify.go:221] Checking for updates...
	I1213 08:37:16.245179   47628 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:37:16.246924   47628 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:37:16.248589   47628 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:37:16.250342   47628 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:37:16.254978   47628 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:37:16.256125   47628 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:37:16.258091   47628 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:37:16.258856   47628 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:37:16.287244   47628 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:37:16.287384   47628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:37:16.358228   47628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 08:37:16.345162452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:37:16.358373   47628 docker.go:319] overlay module found
	I1213 08:37:16.360067   47628 out.go:179] * Using the docker driver based on existing profile
	I1213 08:37:16.361231   47628 start.go:309] selected driver: docker
	I1213 08:37:16.361248   47628 start.go:927] validating driver "docker" against &{Name:functional-413795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-413795 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:37:16.361370   47628 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:37:16.363599   47628 out.go:203] 
	W1213 08:37:16.364780   47628 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:37:16.365824   47628 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-413795 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-413795 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.00022ms)

                                                
                                                
-- stdout --
	* [functional-413795] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:37:16.053073   47470 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:37:16.053209   47470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:37:16.053221   47470 out.go:374] Setting ErrFile to fd 2...
	I1213 08:37:16.053228   47470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:37:16.053650   47470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:37:16.054226   47470 out.go:368] Setting JSON to false
	I1213 08:37:16.055437   47470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1188,"bootTime":1765613848,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:37:16.055523   47470 start.go:143] virtualization: kvm guest
	I1213 08:37:16.057365   47470 out.go:179] * [functional-413795] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:37:16.058902   47470 notify.go:221] Checking for updates...
	I1213 08:37:16.058910   47470 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:37:16.060284   47470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:37:16.061606   47470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:37:16.062898   47470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:37:16.064397   47470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:37:16.066639   47470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:37:16.068390   47470 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:37:16.068949   47470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:37:16.092479   47470 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:37:16.092572   47470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:37:16.150030   47470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 08:37:16.139635018 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:37:16.150136   47470 docker.go:319] overlay module found
	I1213 08:37:16.151965   47470 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 08:37:16.153228   47470 start.go:309] selected driver: docker
	I1213 08:37:16.153243   47470 start.go:927] validating driver "docker" against &{Name:functional-413795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-413795 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:37:16.153331   47470 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:37:16.155273   47470 out.go:203] 
	W1213 08:37:16.156876   47470 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:37:16.157968   47470 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.98s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-413795 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-413795 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-4z6cb" [0001ad9f-6275-4f3d-a9f3-33fb81798d9d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-4z6cb" [0001ad9f-6275-4f3d-a9f3-33fb81798d9d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003686023s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31573
functional_test.go:1680: http://192.168.49.2:31573: success! body:
Request served by hello-node-connect-7d85dfc575-4z6cb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31573
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.67s)
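
The connectivity check above can be replayed by hand against the same cluster; the deployment and service names match the log, while the NodePort (31573 in this run) is assigned per run. A minimal sketch:

# Create the echo-server deployment and expose it on a NodePort, as the test does.
kubectl --context functional-413795 create deployment hello-node-connect --image=kicbase/echo-server
kubectl --context functional-413795 expose deployment hello-node-connect --type=NodePort --port=8080
# Ask minikube for the reachable URL and fetch it; the echo server replies with the request details.
URL=$(out/minikube-linux-amd64 -p functional-413795 service hello-node-connect --url)
curl -s "$URL"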

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [03916be5-a57c-48b3-8d4d-ae6ee0b3e1a0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003973446s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-413795 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-413795 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-413795 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-413795 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:36:57.168025    9303 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e653a5d9-eed0-4c22-9b05-6dba5f3f6307] Pending
helpers_test.go:353: "sp-pod" [e653a5d9-eed0-4c22-9b05-6dba5f3f6307] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e653a5d9-eed0-4c22-9b05-6dba5f3f6307] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002797628s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-413795 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-413795 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-413795 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7d92a35f-0f64-4aff-8f27-479531315336] Pending
helpers_test.go:353: "sp-pod" [7d92a35f-0f64-4aff-8f27-479531315336] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003439343s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-413795 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.30s)
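
The contents of testdata/storage-provisioner/pvc.yaml and pod.yaml are not reproduced in this report; a roughly equivalent claim against the default storage class (the name matches the "myclaim" object queried above, the size is an assumption) would look like:

# Create a claim named "myclaim"; the sp-pod manifest then mounts it at /tmp/mount.
kubectl --context functional-413795 apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF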

                                                
                                    
TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh -n functional-413795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cp functional-413795:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd269642274/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh -n functional-413795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh -n functional-413795 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

                                                
                                    
TestFunctional/parallel/MySQL (21.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-413795 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-zlhwt" [07fd0c55-6ead-463b-98e1-b5071ccd2ef2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-zlhwt" [07fd0c55-6ead-463b-98e1-b5071ccd2ef2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.00373189s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;": exit status 1 (96.144016ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:37:08.462232    9303 retry.go:31] will retry after 597.811366ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;": exit status 1 (88.353393ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:37:09.149402    9303 retry.go:31] will retry after 2.056743248s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;": exit status 1 (99.737404ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:37:11.307040    9303 retry.go:31] will retry after 3.316493611s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-413795 exec mysql-6bcdcbc558-zlhwt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.54s)
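
The ERROR 1045 and ERROR 2002 responses above are expected while the mysqld container is still initializing; the test simply retries until "show databases;" succeeds. The same readiness check can be scripted directly (password and deployment name taken from the log and testdata/mysql.yaml):

# Retry the query every few seconds until MySQL finishes initializing and accepts the root password.
until kubectl --context functional-413795 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
  echo "mysql not ready yet, retrying..."
  sleep 3
done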

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9303/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/test/nested/copy/9303/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
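
The path checked above comes from minikube's file-sync mechanism: files placed under the profile's .minikube/files directory are copied into the node at the same relative path when the cluster starts (the 9303 component is just the test's process ID). A sketch of the same flow, assuming the default ~/.minikube home:

# Stage a file under the sync root; it is copied into the node on "minikube start",
# after which it can be read back over ssh.
mkdir -p ~/.minikube/files/etc/test/nested/copy/9303
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/9303/hosts
out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/test/nested/copy/9303/hosts"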

                                                
                                    
TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9303.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/ssl/certs/9303.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9303.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /usr/share/ca-certificates/9303.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/ssl/certs/93032.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /usr/share/ca-certificates/93032.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
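
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the standard OpenSSL c_rehash convention: certificates synced into the node are also installed under /etc/ssl/certs as <subject-hash>.0 so TLS libraries can find them. Assuming a local copy of one of the synced test certificates, the hash can be cross-checked with:

# Print the subject hash OpenSSL uses for the /etc/ssl/certs/<hash>.0 file name;
# it should match the .0 entry the test reads from the node.
openssl x509 -noout -hash -in 9303.pem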

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-413795 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
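
For anyone replaying this check without the Go template, the same node labels can be listed with:

# Show all labels on the cluster's nodes.
kubectl --context functional-413795 get nodes --show-labels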

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "sudo systemctl is-active docker": exit status 1 (291.910869ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "sudo systemctl is-active containerd": exit status 1 (311.818886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
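
The "Process exited with status 3" lines above are the normal systemctl convention for an inactive unit, so "inactive" on stdout plus a non-zero ssh exit is exactly what the test expects for docker and containerd on a CRI-O cluster. The active runtime can be confirmed the same way:

# CRI-O is the configured runtime for this profile, so this should print "active" and exit 0.
out/minikube-linux-amd64 -p functional-413795 ssh "sudo systemctl is-active crio"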

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413795 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-413795
localhost/kicbase/echo-server:functional-413795
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413795 image ls --format short --alsologtostderr:
I1213 08:37:18.013053   48424 out.go:360] Setting OutFile to fd 1 ...
I1213 08:37:18.013175   48424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:18.013188   48424 out.go:374] Setting ErrFile to fd 2...
I1213 08:37:18.013194   48424 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:18.013453   48424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:37:18.014223   48424 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:18.014367   48424 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:18.014929   48424 cli_runner.go:164] Run: docker container inspect functional-413795 --format={{.State.Status}}
I1213 08:37:18.038447   48424 ssh_runner.go:195] Run: systemctl --version
I1213 08:37:18.038538   48424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-413795
I1213 08:37:18.060362   48424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-413795/id_rsa Username:docker}
I1213 08:37:18.167714   48424 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413795 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-413795  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-413795  │ 4f36974bbeab2 │ 3.33kB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413795 image ls --format table --alsologtostderr:
I1213 08:37:20.858907   49873 out.go:360] Setting OutFile to fd 1 ...
I1213 08:37:20.859167   49873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:20.859177   49873 out.go:374] Setting ErrFile to fd 2...
I1213 08:37:20.859181   49873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:20.859373   49873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:37:20.860015   49873 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:20.860140   49873 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:20.860721   49873 cli_runner.go:164] Run: docker container inspect functional-413795 --format={{.State.Status}}
I1213 08:37:20.879839   49873 ssh_runner.go:195] Run: systemctl --version
I1213 08:37:20.879895   49873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-413795
I1213 08:37:20.901229   49873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-413795/id_rsa Username:docker}
I1213 08:37:20.996992   49873 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413795 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-413795"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1
d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e
314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6cc
d04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@s
ha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a
851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4f36974bbeab27303a84e8f03c1cba8358894ef8c3a66106d07fcf1878fc1b2d","repoDigests":["localhost/minikube-local-cache-test@sha256:6630d8238509ed25a5453990fee335a0860a804abf643aab8c291c119ca966ef"],"repoTags":["localhost/minikube-local-cache-test:functional-413795"],"size":"3330"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269b
a217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413795 image ls --format json --alsologtostderr:
I1213 08:37:20.603463   49645 out.go:360] Setting OutFile to fd 1 ...
I1213 08:37:20.603731   49645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:20.603742   49645 out.go:374] Setting ErrFile to fd 2...
I1213 08:37:20.603746   49645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:20.603940   49645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:37:20.604515   49645 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:20.604611   49645 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:20.605026   49645 cli_runner.go:164] Run: docker container inspect functional-413795 --format={{.State.Status}}
I1213 08:37:20.626637   49645 ssh_runner.go:195] Run: systemctl --version
I1213 08:37:20.626704   49645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-413795
I1213 08:37:20.648790   49645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-413795/id_rsa Username:docker}
I1213 08:37:20.753516   49645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image ls --format yaml --alsologtostderr: (1.338632962s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413795 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-413795
size: "4943877"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 4f36974bbeab27303a84e8f03c1cba8358894ef8c3a66106d07fcf1878fc1b2d
repoDigests:
- localhost/minikube-local-cache-test@sha256:6630d8238509ed25a5453990fee335a0860a804abf643aab8c291c119ca966ef
repoTags:
- localhost/minikube-local-cache-test:functional-413795
size: "3330"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413795 image ls --format yaml --alsologtostderr:
I1213 08:37:18.284478   48499 out.go:360] Setting OutFile to fd 1 ...
I1213 08:37:18.284731   48499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:18.284739   48499 out.go:374] Setting ErrFile to fd 2...
I1213 08:37:18.284743   48499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:18.284927   48499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:37:18.285412   48499 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:18.285529   48499 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:18.285944   48499 cli_runner.go:164] Run: docker container inspect functional-413795 --format={{.State.Status}}
I1213 08:37:18.305697   48499 ssh_runner.go:195] Run: systemctl --version
I1213 08:37:18.305753   48499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-413795
I1213 08:37:18.327110   48499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-413795/id_rsa Username:docker}
I1213 08:37:18.433261   48499 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:37:19.541153   48499 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.10785713s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh pgrep buildkitd: exit status 1 (307.784784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image build -t localhost/my-image:functional-413795 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image build -t localhost/my-image:functional-413795 testdata/build --alsologtostderr: (2.455133299s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-413795 image build -t localhost/my-image:functional-413795 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f3d93598fed
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-413795
--> cf0ab9b6f98
Successfully tagged localhost/my-image:functional-413795
cf0ab9b6f98e500347236de89f976aa850b8ead2df1f0f43c5ca0088cd307472
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-413795 image build -t localhost/my-image:functional-413795 testdata/build --alsologtostderr:
I1213 08:37:19.917886   49344 out.go:360] Setting OutFile to fd 1 ...
I1213 08:37:19.918027   49344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:19.918037   49344 out.go:374] Setting ErrFile to fd 2...
I1213 08:37:19.918041   49344 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:37:19.918211   49344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:37:19.918750   49344 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:19.919329   49344 config.go:182] Loaded profile config "functional-413795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1213 08:37:19.919768   49344 cli_runner.go:164] Run: docker container inspect functional-413795 --format={{.State.Status}}
I1213 08:37:19.938182   49344 ssh_runner.go:195] Run: systemctl --version
I1213 08:37:19.938246   49344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-413795
I1213 08:37:19.955147   49344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-413795/id_rsa Username:docker}
I1213 08:37:20.050097   49344 build_images.go:162] Building image from path: /tmp/build.3871569682.tar
I1213 08:37:20.050156   49344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:37:20.058734   49344 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3871569682.tar
I1213 08:37:20.062740   49344 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3871569682.tar: stat -c "%s %y" /var/lib/minikube/build/build.3871569682.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3871569682.tar': No such file or directory
I1213 08:37:20.062782   49344 ssh_runner.go:362] scp /tmp/build.3871569682.tar --> /var/lib/minikube/build/build.3871569682.tar (3072 bytes)
I1213 08:37:20.082070   49344 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3871569682
I1213 08:37:20.089706   49344 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3871569682 -xf /var/lib/minikube/build/build.3871569682.tar
I1213 08:37:20.098351   49344 crio.go:315] Building image: /var/lib/minikube/build/build.3871569682
I1213 08:37:20.098402   49344 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-413795 /var/lib/minikube/build/build.3871569682 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 08:37:22.291596   49344 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-413795 /var/lib/minikube/build/build.3871569682 --cgroup-manager=cgroupfs: (2.19316466s)
I1213 08:37:22.291667   49344 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3871569682
I1213 08:37:22.300000   49344 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3871569682.tar
I1213 08:37:22.307706   49344 build_images.go:218] Built localhost/my-image:functional-413795 from /tmp/build.3871569682.tar
I1213 08:37:22.307730   49344 build_images.go:134] succeeded building to: functional-413795
I1213 08:37:22.307735   49344 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.98s)
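
From the STEP lines above, the testdata/build context is essentially a three-line Dockerfile plus a content.txt file, and on a CRI-O cluster minikube drives the build through podman inside the node, as the stderr shows. A rough local reconstruction (the content.txt payload is an assumption; only the build steps are taken from the log):

# Recreate an equivalent build context, then build it into the cluster's image store.
mkdir -p /tmp/build-demo
echo "demo content" > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<EOF
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-413795 image build -t localhost/my-image:functional-413795 /tmp/build-demo --alsologtostderr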

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-413795
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image load --daemon kicbase/echo-server:functional-413795 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image load --daemon kicbase/echo-server:functional-413795 --alsologtostderr: (1.041382819s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
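
update-context refreshes the kubeconfig entry for the profile so that the server address matches the running cluster; after running it, the effect can be inspected directly, roughly as follows:

# Refresh the kubeconfig entry for this profile, then show the API server it points at.
out/minikube-linux-amd64 -p functional-413795 update-context
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-413795")].cluster.server}'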

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image load --daemon kicbase/echo-server:functional-413795 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-413795
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image load --daemon kicbase/echo-server:functional-413795 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 43885: os: process already finished
helpers_test.go:526: unable to kill pid 43668: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-413795 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [15c79236-e303-42f2-a450-81ab1a1ef2cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [15c79236-e303-42f2-a450-81ab1a1ef2cf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.003469096s
I1213 08:37:09.575387    9303 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image save kicbase/echo-server:functional-413795 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image rm kicbase/echo-server:functional-413795 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image rm kicbase/echo-server:functional-413795 --alsologtostderr: (1.246248892s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image ls: (1.105264241s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.665400473s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-413795
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 image save --daemon kicbase/echo-server:functional-413795 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 image save --daemon kicbase/echo-server:functional-413795 --alsologtostderr: (1.074078676s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-413795
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-413795 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.79.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-413795 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-413795 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-413795 expose deployment hello-node --type=NodePort --port=8080
I1213 08:37:09.878934    9303 detect.go:223] nested VM detected
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-zk7bh" [c01601ab-3e82-4629-b0e3-ab3d91dcf16f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-zk7bh" [c01601ab-3e82-4629-b0e3-ab3d91dcf16f] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.002475067s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "331.956902ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.599854ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "326.320966ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.423581ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdany-port470829513/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615032298303685" to /tmp/TestFunctionalparallelMountCmdany-port470829513/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615032298303685" to /tmp/TestFunctionalparallelMountCmdany-port470829513/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615032298303685" to /tmp/TestFunctionalparallelMountCmdany-port470829513/001/test-1765615032298303685
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.053859ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 08:37:12.571600    9303 retry.go:31] will retry after 650.782564ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:37 test-1765615032298303685
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh cat /mount-9p/test-1765615032298303685
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-413795 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f383538b-8fd7-4706-b905-fe5c08533a42] Pending
helpers_test.go:353: "busybox-mount" [f383538b-8fd7-4706-b905-fe5c08533a42] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f383538b-8fd7-4706-b905-fe5c08533a42] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f383538b-8fd7-4706-b905-fe5c08533a42] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003618301s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-413795 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdany-port470829513/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 service list: (1.748943408s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.8s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-413795 service list -o json: (1.800077919s)
functional_test.go:1504: Took "1.800172455s" to run "out/minikube-linux-amd64 -p functional-413795 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.1s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdspecific-port3628361136/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.95356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:37:19.657419    9303 retry.go:31] will retry after 712.214674ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdspecific-port3628361136/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "sudo umount -f /mount-9p": exit status 1 (265.956025ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-413795 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdspecific-port3628361136/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30514
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service hello-node --url --format={{.IP}}
2025/12/13 08:37:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30514
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T" /mount1: exit status 1 (369.92557ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:37:21.790124    9303 retry.go:31] will retry after 730.421705ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-413795 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-413795 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-413795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3754080493/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-413795
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-413795
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-413795
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22128-5776/.minikube/files/etc/test/nested/copy/9303/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (36.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-331564 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (36.477903721s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (36.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1213 08:38:03.614810    9303 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-331564 --alsologtostderr -v=8: (6.053475546s)
functional_test.go:678: soft start took 6.05380308s for "functional-331564" cluster.
I1213 08:38:09.668630    9303 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (6.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-331564 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.47s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2974868170/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache add minikube-local-cache-test:functional-331564
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache delete minikube-local-cache-test:functional-331564
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.82s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.284423ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 kubectl -- --context functional-331564 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-331564 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (58.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1213 08:38:47.410995    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-331564 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.053940566s)
functional_test.go:776: restart took 58.054054916s for "functional-331564" cluster.
I1213 08:39:13.392145    9303 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (58.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-331564 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 logs: (1.218372351s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2327456317/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2327456317/001/logs.txt: (1.239114789s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (11.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-331564 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-331564
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-331564: exit status 115 (333.336512ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30380 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-331564 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-331564 delete -f testdata/invalidsvc.yaml: (8.167519982s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (11.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 config get cpus: exit status 14 (82.827927ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 config get cpus: exit status 14 (87.186803ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-331564 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-331564 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 67544: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (6.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-331564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (168.827337ms)

                                                
                                                
-- stdout --
	* [functional-331564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:39:43.728418   65144 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:43.728551   65144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:43.728561   65144 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:43.728566   65144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:43.728826   65144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:39:43.729267   65144 out.go:368] Setting JSON to false
	I1213 08:39:43.730299   65144 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1336,"bootTime":1765613848,"procs":259,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:39:43.730351   65144 start.go:143] virtualization: kvm guest
	I1213 08:39:43.732936   65144 out.go:179] * [functional-331564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 08:39:43.734204   65144 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:43.734206   65144 notify.go:221] Checking for updates...
	I1213 08:39:43.736601   65144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:43.737992   65144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:39:43.743019   65144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:39:43.744311   65144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:39:43.745475   65144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:43.747085   65144 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 08:39:43.747646   65144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:43.773528   65144 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:39:43.773633   65144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:43.828435   65144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 08:39:43.817757081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:39:43.828565   65144 docker.go:319] overlay module found
	I1213 08:39:43.830396   65144 out.go:179] * Using the docker driver based on existing profile
	I1213 08:39:43.831702   65144 start.go:309] selected driver: docker
	I1213 08:39:43.831716   65144 start.go:927] validating driver "docker" against &{Name:functional-331564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331564 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:43.831792   65144 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:43.833605   65144 out.go:203] 
	W1213 08:39:43.834694   65144 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 08:39:43.835891   65144 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-331564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-331564 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (189.326878ms)

                                                
                                                
-- stdout --
	* [functional-331564] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:39:43.370685   64999 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:39:43.370830   64999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:43.370844   64999 out.go:374] Setting ErrFile to fd 2...
	I1213 08:39:43.370850   64999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:39:43.371256   64999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:39:43.371832   64999 out.go:368] Setting JSON to false
	I1213 08:39:43.373121   64999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1335,"bootTime":1765613848,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 08:39:43.373189   64999 start.go:143] virtualization: kvm guest
	I1213 08:39:43.375397   64999 out.go:179] * [functional-331564] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1213 08:39:43.377241   64999 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 08:39:43.377246   64999 notify.go:221] Checking for updates...
	I1213 08:39:43.378780   64999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 08:39:43.380615   64999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 08:39:43.383273   64999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 08:39:43.385259   64999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 08:39:43.386715   64999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 08:39:43.388796   64999 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1213 08:39:43.389512   64999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 08:39:43.420110   64999 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 08:39:43.420221   64999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:39:43.486176   64999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-13 08:39:43.475122889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:39:43.486331   64999 docker.go:319] overlay module found
	I1213 08:39:43.488529   64999 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1213 08:39:43.489778   64999 start.go:309] selected driver: docker
	I1213 08:39:43.489796   64999 start.go:927] validating driver "docker" against &{Name:functional-331564 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-331564 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 08:39:43.489911   64999 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 08:39:43.491771   64999 out.go:203] 
	W1213 08:39:43.493079   64999 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 08:39:43.494405   64999 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
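For reference, the French stderr above translates to "Using the docker driver based on existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", i.e. the same deliberate failure as the English DryRun case, just localized. A rough way to reproduce the localized output by hand, assuming minikube picks its language from the standard locale variables (LC_ALL/LANG), would be:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-331564 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# expected: exit status 23 with the RSRC_INSUFFICIENT_REQ_MEMORY message in French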

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.95s)
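The -f flag used above is a Go template evaluated against minikube's status struct (Host, Kubelet, APIServer, Kubeconfig); the "kublet" label is simply literal text in the template, typo included. As a sketch, for a healthy cluster the formatted call would print roughly:

	out/minikube-linux-amd64 -p functional-331564 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# roughly: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured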

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-331564 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-331564 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-56j8p" [402f8af3-bb48-41ad-ad4a-73e30d9871f4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-56j8p" [402f8af3-bb48-41ad-ad4a-73e30d9871f4] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004202366s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31008
functional_test.go:1680: http://192.168.49.2:31008: success! body:
Request served by hello-node-connect-9f67c86d4-56j8p

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31008
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (8.74s)
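The body above is the echo-server reflecting the request it received over the NodePort URL that "service hello-node-connect --url" reported. While the deployment is still up, the same check can be repeated by hand from the host, for example:

	curl -s http://192.168.49.2:31008/
	# expected: a reflection of the request, starting with "Request served by hello-node-connect-..."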

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [4c3077ce-e411-48ed-ac90-414ff9c74a01] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005011339s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-331564 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-331564 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-331564 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-331564 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:39:36.906588    9303 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [152ea86a-3cfd-4c4f-8649-0d07c02340c9] Pending
helpers_test.go:353: "sp-pod" [152ea86a-3cfd-4c4f-8649-0d07c02340c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [152ea86a-3cfd-4c4f-8649-0d07c02340c9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00371846s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-331564 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-331564 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-331564 apply -f testdata/storage-provisioner/pod.yaml
I1213 08:39:45.794572    9303 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0af76fc3-aade-4c8c-a024-f1487bb5eb8a] Pending
helpers_test.go:353: "sp-pod" [0af76fc3-aade-4c8c-a024-f1487bb5eb8a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003983381s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-331564 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (21.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh -n functional-331564 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cp functional-331564:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp844518036/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh -n functional-331564 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh -n functional-331564 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (21.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-331564 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-872rk" [58e0c17a-9460-4dcc-a6aa-6f08f98acd2e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-872rk" [58e0c17a-9460-4dcc-a6aa-6f08f98acd2e] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 14.003934599s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;": exit status 1 (110.339537ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:39:42.373970    9303 retry.go:31] will retry after 731.367526ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;": exit status 1 (133.860691ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:39:43.239953    9303 retry.go:31] will retry after 2.123015213s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;": exit status 1 (89.026396ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:39:45.452773    9303 retry.go:31] will retry after 1.188421736s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;": exit status 1 (103.654389ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1213 08:39:46.745837    9303 retry.go:31] will retry after 3.175525146s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-331564 exec mysql-7d7b65bc95-872rk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (21.95s)
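The ERROR 1045 and ERROR 2002 responses above are the usual transient states while the mysql container is still initializing (the configured root password has not been applied yet, and the server socket briefly disappears around the init restart), which is why the harness retries with backoff until "show databases" succeeds. Assuming the deployment created by testdata/mysql.yaml is named mysql, the same probe can be run manually once the pod is Running:

	kubectl --context functional-331564 exec deploy/mysql -- mysql -ppassword -e "show databases;"
	# typically succeeds within roughly 10-20s of the pod reaching Running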

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9303/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /etc/test/nested/copy/9303/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9303.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /etc/ssl/certs/9303.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9303.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /usr/share/ca-certificates/9303.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /etc/ssl/certs/93032.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93032.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /usr/share/ca-certificates/93032.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.03s)
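The hash-named files checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: the filename is the certificate's subject hash plus a .0 suffix, which is how the synced test certs become visible to TLS clients scanning /etc/ssl/certs. Assuming the synced files are plain PEM certificates, the correspondence can be confirmed inside the node with something like:

	out/minikube-linux-amd64 -p functional-331564 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/9303.pem"
	# should print 51391683 if that hash link belongs to the 9303.pem certificate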

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-331564 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "sudo systemctl is-active docker": exit status 1 (320.782661ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "sudo systemctl is-active containerd": exit status 1 (337.087597ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.66s)
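The non-zero exits here are the passing case: with crio as the configured runtime, "systemctl is-active docker" and "systemctl is-active containerd" report "inactive", and systemd signals that with exit status 3, which ssh surfaces as "Process exited with status 3". The complementary check, assuming the same profile, would be:

	out/minikube-linux-amd64 -p functional-331564 ssh "sudo systemctl is-active crio"
	# expected: "active" with exit status 0, since crio is the runtime under test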

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-331564 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-331564
localhost/kicbase/echo-server:functional-331564
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-331564 image ls --format short --alsologtostderr:
I1213 08:39:53.442646   68432 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:53.442976   68432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.442987   68432 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:53.442993   68432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.443221   68432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:39:53.443828   68432 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.443972   68432 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.444611   68432 cli_runner.go:164] Run: docker container inspect functional-331564 --format={{.State.Status}}
I1213 08:39:53.465409   68432 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:53.465454   68432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331564
I1213 08:39:53.486862   68432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-331564/id_rsa Username:docker}
I1213 08:39:53.592143   68432 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-331564 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-331564  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-331564  │ 4f36974bbeab2 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-331564 image ls --format table --alsologtostderr:
I1213 08:39:54.956422   68806 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:54.956533   68806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:54.956540   68806 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:54.956546   68806 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:54.956753   68806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:39:54.957342   68806 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:54.957430   68806 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:54.957872   68806 cli_runner.go:164] Run: docker container inspect functional-331564 --format={{.State.Status}}
I1213 08:39:54.979770   68806 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:54.979822   68806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331564
I1213 08:39:55.000191   68806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-331564/id_rsa Username:docker}
I1213 08:39:55.096044   68806 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image ls --format json --alsologtostderr: (1.232105029s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-331564 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner
@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843
f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-d
f8de77b"],"size":"109379124"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794
573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4f36974bbeab27303a84e8f03c1cba8358894ef8c3a66106d07fcf1878fc1b2d","repoDigests":["localhost/minikube-local-cache-test@sha256:6630d8238509ed25a5453990fee335a0860a804abf643aab8c291c119ca966ef"],"repoTags":["localhost/minikube-local-cache-test:functional-331564"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae
05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io
/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-331564"],"size":"4943877"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-331564 image ls --format json --alsologtostderr:
I1213 08:39:53.736675   68544 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:53.736786   68544 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.736796   68544 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:53.736802   68544 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.737114   68544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:39:53.737951   68544 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.738090   68544 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.738750   68544 cli_runner.go:164] Run: docker container inspect functional-331564 --format={{.State.Status}}
I1213 08:39:53.759224   68544 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:53.759380   68544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331564
I1213 08:39:53.788075   68544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-331564/id_rsa Username:docker}
I1213 08:39:53.895350   68544 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-331564 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 4f36974bbeab27303a84e8f03c1cba8358894ef8c3a66106d07fcf1878fc1b2d
repoDigests:
- localhost/minikube-local-cache-test@sha256:6630d8238509ed25a5453990fee335a0860a804abf643aab8c291c119ca966ef
repoTags:
- localhost/minikube-local-cache-test:functional-331564
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-331564
size: "4943877"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-331564 image ls --format yaml --alsologtostderr:
I1213 08:39:53.460501   68438 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:53.460803   68438 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.460814   68438 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:53.460821   68438 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:53.461101   68438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:39:53.461857   68438 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.461988   68438 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:53.462617   68438 cli_runner.go:164] Run: docker container inspect functional-331564 --format={{.State.Status}}
I1213 08:39:53.484738   68438 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:53.484804   68438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331564
I1213 08:39:53.509422   68438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-331564/id_rsa Username:docker}
I1213 08:39:53.615058   68438 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh pgrep buildkitd: exit status 1 (324.310662ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image build -t localhost/my-image:functional-331564 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image build -t localhost/my-image:functional-331564 testdata/build --alsologtostderr: (2.790832197s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-331564 image build -t localhost/my-image:functional-331564 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8e1b3389711
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-331564
--> 0c135399dba
Successfully tagged localhost/my-image:functional-331564
0c135399dba3666d490a44b48eca8598d48dfe0cb462662b18edc87fad59bba4
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-331564 image build -t localhost/my-image:functional-331564 testdata/build --alsologtostderr:
I1213 08:39:54.038661   68702 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:54.038902   68702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:54.038911   68702 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:54.038914   68702 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:54.039126   68702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
I1213 08:39:54.039706   68702 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:54.040345   68702 config.go:182] Loaded profile config "functional-331564": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:54.040760   68702 cli_runner.go:164] Run: docker container inspect functional-331564 --format={{.State.Status}}
I1213 08:39:54.064703   68702 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:54.064771   68702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-331564
I1213 08:39:54.089048   68702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/functional-331564/id_rsa Username:docker}
I1213 08:39:54.197509   68702 build_images.go:162] Building image from path: /tmp/build.2556568947.tar
I1213 08:39:54.197575   68702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 08:39:54.207930   68702 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2556568947.tar
I1213 08:39:54.212238   68702 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2556568947.tar: stat -c "%s %y" /var/lib/minikube/build/build.2556568947.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2556568947.tar': No such file or directory
I1213 08:39:54.212274   68702 ssh_runner.go:362] scp /tmp/build.2556568947.tar --> /var/lib/minikube/build/build.2556568947.tar (3072 bytes)
I1213 08:39:54.234937   68702 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2556568947
I1213 08:39:54.245454   68702 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2556568947 -xf /var/lib/minikube/build/build.2556568947.tar
I1213 08:39:54.256657   68702 crio.go:315] Building image: /var/lib/minikube/build/build.2556568947
I1213 08:39:54.256733   68702 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-331564 /var/lib/minikube/build/build.2556568947 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1213 08:39:56.734264   68702 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-331564 /var/lib/minikube/build/build.2556568947 --cgroup-manager=cgroupfs: (2.477507042s)
I1213 08:39:56.734329   68702 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2556568947
I1213 08:39:56.743017   68702 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2556568947.tar
I1213 08:39:56.750705   68702 build_images.go:218] Built localhost/my-image:functional-331564 from /tmp/build.2556568947.tar
I1213 08:39:56.750734   68702 build_images.go:134] succeeded building to: functional-331564
I1213 08:39:56.750750   68702 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.34s)
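For reference, the STEP lines above imply a build context roughly like the sketch below; the actual contents of testdata/build are not shown in this log, so the Dockerfile and the content.txt payload here are reconstructions.

# recreate an equivalent build context (sketch; file contents are placeholders)
mkdir -p /tmp/build-ctx
printf 'hello\n' > /tmp/build-ctx/content.txt
cat > /tmp/build-ctx/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# without buildkitd, the crio runtime builds via podman inside the node, as logged above
out/minikube-linux-amd64 -p functional-331564 image build -t localhost/my-image:functional-331564 /tmp/build-ctx
out/minikube-linux-amd64 -p functional-331564 image ls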

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.59s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr: (1.251475971s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.59s)
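Condensed, the Setup and ImageLoadDaemon subtests above amount to the following host-side sequence (a sketch using the same image and profile names as this run):

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-331564
# copy the image from the host docker daemon into the node's crio image store
out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564
out/minikube-linux-amd64 -p functional-331564 image ls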

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 62575: os: process already finished
helpers_test.go:520: unable to terminate pid 62334: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-331564 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [00c2788a-ec54-423e-b4f7-63c613e7b3c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [00c2788a-ec54-423e-b4f7-63c613e7b3c8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003403665s
I1213 08:39:43.138191    9303 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-331564
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image load --daemon kicbase/echo-server:functional-331564 --alsologtostderr: (1.012095018s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image save kicbase/echo-server:functional-331564 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 image save kicbase/echo-server:functional-331564 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.11415004s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image rm kicbase/echo-server:functional-331564 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)
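ImageSaveToFile, ImageRemove and ImageLoadFromFile above exercise the tarball round trip; condensed it is (sketch, with the Jenkins workspace path shortened to /tmp for illustration):

# export the image from the cluster to a tarball on the host
out/minikube-linux-amd64 -p functional-331564 image save kicbase/echo-server:functional-331564 /tmp/echo-server-save.tar
# remove it from the cluster, then restore it from the tarball
out/minikube-linux-amd64 -p functional-331564 image rm kicbase/echo-server:functional-331564
out/minikube-linux-amd64 -p functional-331564 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-331564 image ls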

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-331564
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 image save --daemon kicbase/echo-server:functional-331564 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo719679819/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765615180018637204" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo719679819/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765615180018637204" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo719679819/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765615180018637204" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo719679819/001/test-1765615180018637204
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.309639ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:39:40.294256    9303 retry.go:31] will retry after 382.291002ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 08:39 test-1765615180018637204
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh cat /mount-9p/test-1765615180018637204
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-331564 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [34afb4a8-1dca-4eaf-8545-43fca7d83231] Pending
helpers_test.go:353: "busybox-mount" [34afb4a8-1dca-4eaf-8545-43fca7d83231] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [34afb4a8-1dca-4eaf-8545-43fca7d83231] Running
helpers_test.go:353: "busybox-mount" [34afb4a8-1dca-4eaf-8545-43fca7d83231] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [34afb4a8-1dca-4eaf-8545-43fca7d83231] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00310369s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-331564 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo719679819/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.86s)
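Stripped of the test scaffolding, the any-port mount flow above is roughly (sketch; /tmp/mount-src stands in for the per-test temp directory):

mkdir -p /tmp/mount-src
# expose the host directory inside the node over 9p, in the background
out/minikube-linux-amd64 mount -p functional-331564 /tmp/mount-src:/mount-9p &
# verify the mount from inside the node
out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-331564 ssh -- ls -la /mount-9p
# a pod can read and write the same path (manifest from testdata/busybox-mount-test.yaml)
kubectl --context functional-331564 replace --force -f testdata/busybox-mount-test.yaml
# clean up the guest mount when done
out/minikube-linux-amd64 -p functional-331564 ssh "sudo umount -f /mount-9p"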

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-331564 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.230.125 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
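Taken together, the tunnel subtests correspond to this workflow (sketch; the LoadBalancer IP is whatever the cluster assigns, 10.106.230.125 in this run):

# keep a tunnel running so LoadBalancer services get a host-reachable IP
out/minikube-linux-amd64 -p functional-331564 tunnel &
kubectl --context functional-331564 apply -f testdata/testsvc.yaml
# once nginx-svc is Running, read the assigned ingress IP and hit it directly
kubectl --context functional-331564 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://10.106.230.125/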

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-331564 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3546357239/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.953486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:39:47.176119    9303 retry.go:31] will retry after 689.221956ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3546357239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "sudo umount -f /mount-9p": exit status 1 (260.421965ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-331564 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3546357239/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount1: exit status 1 (325.430114ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1213 08:39:49.197310    9303 retry.go:31] will retry after 392.378388ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-331564 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-331564 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3066388592/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.61s)
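VerifyCleanup shows that a single --kill invocation tears down every outstanding mount for the profile; condensed (sketch):

mkdir -p /tmp/mount-src
for m in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 mount -p functional-331564 /tmp/mount-src:$m &
done
out/minikube-linux-amd64 -p functional-331564 ssh "findmnt -T" /mount1
# one flag kills all mount processes belonging to the profile
out/minikube-linux-amd64 mount -p functional-331564 --kill=true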

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-331564 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-331564 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-z4qj7" [fe6cb9bd-b4c1-49c5-b1c3-60af312a493e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-z4qj7" [fe6cb9bd-b4c1-49c5-b1c3-60af312a493e] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004088339s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (8.14s)
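The remaining ServiceCmd subtests build on this deployment; end to end the flow is (sketch):

kubectl --context functional-331564 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-331564 expose deployment hello-node --type=NodePort --port=8080
# list the services minikube knows about, then resolve a reachable URL for hello-node
out/minikube-linux-amd64 -p functional-331564 service list
out/minikube-linux-amd64 -p functional-331564 service hello-node --url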

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 version -o=json --components
2025/12/13 08:39:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "419.017481ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "81.757562ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "420.808634ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "86.467509ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 service list: (1.699533089s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-331564 service list -o json: (1.704653893s)
functional_test.go:1504: Took "1.704757269s" to run "out/minikube-linux-amd64 -p functional-331564 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30177
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-331564 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30177
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-331564
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (145.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 08:41:03.548380    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:31.252738    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.684040    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.690669    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.702066    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.723950    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.765730    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:50.847129    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:51.008714    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:51.330946    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:51.972241    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:53.253950    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:41:55.815664    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:00.938105    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:11.180281    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m24.783898133s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
E1213 08:42:31.662269    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (145.51s)
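The HA cluster reused by the remaining TestMultiControlPlane subtests is created and verified with (sketch, flags as in the run above):

out/minikube-linux-amd64 -p ha-608454 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
# confirm every node reports Ready before the dependent subtests run
out/minikube-linux-amd64 -p ha-608454 status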

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 kubectl -- rollout status deployment/busybox: (3.87531773s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-5f4x2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-pxw79 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-rg98b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-5f4x2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-pxw79 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-rg98b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-5f4x2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-pxw79 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-rg98b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.84s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-5f4x2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-5f4x2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-pxw79 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-pxw79 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-rg98b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 kubectl -- exec busybox-7b57f96db7-rg98b -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
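For a single pod, the host-reachability check above reduces to (sketch; pod name taken from this run):

# resolve host.minikube.internal inside the pod and extract the address field
kubectl --context ha-608454 exec busybox-7b57f96db7-5f4x2 -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# with the docker driver that address is the bridge gateway; a single ping confirms it is reachable
kubectl --context ha-608454 exec busybox-7b57f96db7-5f4x2 -- sh -c "ping -c 1 192.168.49.1"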

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (26.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 node add --alsologtostderr -v 5: (26.109432142s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.97s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-608454 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp testdata/cp-test.txt ha-608454:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1149209100/001/cp-test_ha-608454.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454:/home/docker/cp-test.txt ha-608454-m02:/home/docker/cp-test_ha-608454_ha-608454-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test_ha-608454_ha-608454-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454:/home/docker/cp-test.txt ha-608454-m03:/home/docker/cp-test_ha-608454_ha-608454-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test_ha-608454_ha-608454-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454:/home/docker/cp-test.txt ha-608454-m04:/home/docker/cp-test_ha-608454_ha-608454-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test_ha-608454_ha-608454-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp testdata/cp-test.txt ha-608454-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1149209100/001/cp-test_ha-608454-m02.txt
E1213 08:43:12.623547    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m02:/home/docker/cp-test.txt ha-608454:/home/docker/cp-test_ha-608454-m02_ha-608454.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test_ha-608454-m02_ha-608454.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m02:/home/docker/cp-test.txt ha-608454-m03:/home/docker/cp-test_ha-608454-m02_ha-608454-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test_ha-608454-m02_ha-608454-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m02:/home/docker/cp-test.txt ha-608454-m04:/home/docker/cp-test_ha-608454-m02_ha-608454-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test_ha-608454-m02_ha-608454-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp testdata/cp-test.txt ha-608454-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1149209100/001/cp-test_ha-608454-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m03:/home/docker/cp-test.txt ha-608454:/home/docker/cp-test_ha-608454-m03_ha-608454.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test_ha-608454-m03_ha-608454.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m03:/home/docker/cp-test.txt ha-608454-m02:/home/docker/cp-test_ha-608454-m03_ha-608454-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test_ha-608454-m03_ha-608454-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m03:/home/docker/cp-test.txt ha-608454-m04:/home/docker/cp-test_ha-608454-m03_ha-608454-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test_ha-608454-m03_ha-608454-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp testdata/cp-test.txt ha-608454-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1149209100/001/cp-test_ha-608454-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m04:/home/docker/cp-test.txt ha-608454:/home/docker/cp-test_ha-608454-m04_ha-608454.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454 "sudo cat /home/docker/cp-test_ha-608454-m04_ha-608454.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m04:/home/docker/cp-test.txt ha-608454-m02:/home/docker/cp-test_ha-608454-m04_ha-608454-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m02 "sudo cat /home/docker/cp-test_ha-608454-m04_ha-608454-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 cp ha-608454-m04:/home/docker/cp-test.txt ha-608454-m03:/home/docker/cp-test_ha-608454-m04_ha-608454-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 ssh -n ha-608454-m03 "sudo cat /home/docker/cp-test_ha-608454-m04_ha-608454-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.97s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 node stop m02 --alsologtostderr -v 5: (19.064364142s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5: exit status 7 (689.086742ms)

                                                
                                                
-- stdout --
	ha-608454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-608454-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-608454-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-608454-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:43:43.223828   89554 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:43:43.224115   89554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:43.224125   89554 out.go:374] Setting ErrFile to fd 2...
	I1213 08:43:43.224129   89554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:43:43.224312   89554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:43:43.224473   89554 out.go:368] Setting JSON to false
	I1213 08:43:43.224512   89554 mustload.go:66] Loading cluster: ha-608454
	I1213 08:43:43.224637   89554 notify.go:221] Checking for updates...
	I1213 08:43:43.225011   89554 config.go:182] Loaded profile config "ha-608454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:43:43.225031   89554 status.go:174] checking status of ha-608454 ...
	I1213 08:43:43.226278   89554 cli_runner.go:164] Run: docker container inspect ha-608454 --format={{.State.Status}}
	I1213 08:43:43.245906   89554 status.go:371] ha-608454 host status = "Running" (err=<nil>)
	I1213 08:43:43.245953   89554 host.go:66] Checking if "ha-608454" exists ...
	I1213 08:43:43.246356   89554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608454
	I1213 08:43:43.264432   89554 host.go:66] Checking if "ha-608454" exists ...
	I1213 08:43:43.264683   89554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:43:43.264722   89554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608454
	I1213 08:43:43.282007   89554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/ha-608454/id_rsa Username:docker}
	I1213 08:43:43.375999   89554 ssh_runner.go:195] Run: systemctl --version
	I1213 08:43:43.382596   89554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:43:43.394463   89554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:43:43.451614   89554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-13 08:43:43.441443028 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:43:43.452318   89554 kubeconfig.go:125] found "ha-608454" server: "https://192.168.49.254:8443"
	I1213 08:43:43.452352   89554 api_server.go:166] Checking apiserver status ...
	I1213 08:43:43.452410   89554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:43:43.464942   89554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup
	W1213 08:43:43.473107   89554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1264/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:43:43.473150   89554 ssh_runner.go:195] Run: ls
	I1213 08:43:43.476578   89554 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 08:43:43.482105   89554 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 08:43:43.482128   89554 status.go:463] ha-608454 apiserver status = Running (err=<nil>)
	I1213 08:43:43.482151   89554 status.go:176] ha-608454 status: &{Name:ha-608454 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:43:43.482173   89554 status.go:174] checking status of ha-608454-m02 ...
	I1213 08:43:43.482426   89554 cli_runner.go:164] Run: docker container inspect ha-608454-m02 --format={{.State.Status}}
	I1213 08:43:43.501150   89554 status.go:371] ha-608454-m02 host status = "Stopped" (err=<nil>)
	I1213 08:43:43.501177   89554 status.go:384] host is not running, skipping remaining checks
	I1213 08:43:43.501184   89554 status.go:176] ha-608454-m02 status: &{Name:ha-608454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:43:43.501202   89554 status.go:174] checking status of ha-608454-m03 ...
	I1213 08:43:43.501460   89554 cli_runner.go:164] Run: docker container inspect ha-608454-m03 --format={{.State.Status}}
	I1213 08:43:43.519572   89554 status.go:371] ha-608454-m03 host status = "Running" (err=<nil>)
	I1213 08:43:43.519593   89554 host.go:66] Checking if "ha-608454-m03" exists ...
	I1213 08:43:43.519820   89554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608454-m03
	I1213 08:43:43.538860   89554 host.go:66] Checking if "ha-608454-m03" exists ...
	I1213 08:43:43.539113   89554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:43:43.539155   89554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608454-m03
	I1213 08:43:43.557242   89554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/ha-608454-m03/id_rsa Username:docker}
	I1213 08:43:43.649927   89554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:43:43.663114   89554 kubeconfig.go:125] found "ha-608454" server: "https://192.168.49.254:8443"
	I1213 08:43:43.663139   89554 api_server.go:166] Checking apiserver status ...
	I1213 08:43:43.663174   89554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:43:43.673518   89554 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup
	W1213 08:43:43.681535   89554 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1184/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:43:43.681585   89554 ssh_runner.go:195] Run: ls
	I1213 08:43:43.685062   89554 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 08:43:43.689139   89554 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 08:43:43.689159   89554 status.go:463] ha-608454-m03 apiserver status = Running (err=<nil>)
	I1213 08:43:43.689167   89554 status.go:176] ha-608454-m03 status: &{Name:ha-608454-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:43:43.689181   89554 status.go:174] checking status of ha-608454-m04 ...
	I1213 08:43:43.689447   89554 cli_runner.go:164] Run: docker container inspect ha-608454-m04 --format={{.State.Status}}
	I1213 08:43:43.707806   89554 status.go:371] ha-608454-m04 host status = "Running" (err=<nil>)
	I1213 08:43:43.707831   89554 host.go:66] Checking if "ha-608454-m04" exists ...
	I1213 08:43:43.708099   89554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608454-m04
	I1213 08:43:43.726699   89554 host.go:66] Checking if "ha-608454-m04" exists ...
	I1213 08:43:43.727027   89554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:43:43.727064   89554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608454-m04
	I1213 08:43:43.746095   89554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/ha-608454-m04/id_rsa Username:docker}
	I1213 08:43:43.839118   89554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:43:43.851866   89554 status.go:176] ha-608454-m04 status: &{Name:ha-608454-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 node start m02 --alsologtostderr -v 5: (7.62778175s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 stop --alsologtostderr -v 5
E1213 08:44:28.260293    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.266652    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.277950    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.299305    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.340661    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.422120    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.583586    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:28.905236    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:29.547328    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:30.829111    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:33.391326    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:34.545004    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:38.513548    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 stop --alsologtostderr -v 5: (48.25082697s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 start --wait true --alsologtostderr -v 5
E1213 08:44:48.755633    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:45:09.237379    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 start --wait true --alsologtostderr -v 5: (57.969931765s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.35s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 node delete m03 --alsologtostderr -v 5: (9.726163035s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
E1213 08:45:50.199021    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (48.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 stop --alsologtostderr -v 5
E1213 08:46:03.549704    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 stop --alsologtostderr -v 5: (48.406359156s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5: exit status 7 (116.557191ms)

                                                
                                                
-- stdout --
	ha-608454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-608454-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-608454-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:46:40.061821  103874 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:46:40.062205  103874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:40.062221  103874 out.go:374] Setting ErrFile to fd 2...
	I1213 08:46:40.062228  103874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:46:40.062766  103874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:46:40.063162  103874 out.go:368] Setting JSON to false
	I1213 08:46:40.063205  103874 mustload.go:66] Loading cluster: ha-608454
	I1213 08:46:40.063293  103874 notify.go:221] Checking for updates...
	I1213 08:46:40.063855  103874 config.go:182] Loaded profile config "ha-608454": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:46:40.063876  103874 status.go:174] checking status of ha-608454 ...
	I1213 08:46:40.064337  103874 cli_runner.go:164] Run: docker container inspect ha-608454 --format={{.State.Status}}
	I1213 08:46:40.083479  103874 status.go:371] ha-608454 host status = "Stopped" (err=<nil>)
	I1213 08:46:40.083511  103874 status.go:384] host is not running, skipping remaining checks
	I1213 08:46:40.083519  103874 status.go:176] ha-608454 status: &{Name:ha-608454 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:46:40.083544  103874 status.go:174] checking status of ha-608454-m02 ...
	I1213 08:46:40.083770  103874 cli_runner.go:164] Run: docker container inspect ha-608454-m02 --format={{.State.Status}}
	I1213 08:46:40.101223  103874 status.go:371] ha-608454-m02 host status = "Stopped" (err=<nil>)
	I1213 08:46:40.101245  103874 status.go:384] host is not running, skipping remaining checks
	I1213 08:46:40.101253  103874 status.go:176] ha-608454-m02 status: &{Name:ha-608454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:46:40.101272  103874 status.go:174] checking status of ha-608454-m04 ...
	I1213 08:46:40.101566  103874 cli_runner.go:164] Run: docker container inspect ha-608454-m04 --format={{.State.Status}}
	I1213 08:46:40.118338  103874 status.go:371] ha-608454-m04 host status = "Stopped" (err=<nil>)
	I1213 08:46:40.118359  103874 status.go:384] host is not running, skipping remaining checks
	I1213 08:46:40.118389  103874 status.go:176] ha-608454-m04 status: &{Name:ha-608454-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (57.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1213 08:46:50.686248    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:12.121159    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:18.387239    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.366146104s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (85.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-608454 node add --control-plane --alsologtostderr -v 5: (1m24.547812848s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-608454 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
TestJSONOutput/start/Command (38.18s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-095899 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1213 08:49:28.260666    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-095899 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.18197114s)
--- PASS: TestJSONOutput/start/Command (38.18s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-095899 --output=json --user=testUser
E1213 08:49:55.963277    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-095899 --output=json --user=testUser: (7.990408895s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-891097 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-891097 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.139473ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"85ae663b-7b4a-47db-99fe-df1cd47ab0d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-891097] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c144b15-3956-40a0-b809-d884e342ffbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"72432529-382d-4594-8200-349054415cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c206b8a-a9e4-48e9-8ace-cc6490117ff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig"}}
	{"specversion":"1.0","id":"edd73e5a-7abd-414c-842e-7b774ce583cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube"}}
	{"specversion":"1.0","id":"74d77c31-16e1-4ee5-980d-e649cabe2b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e72ea82d-02f2-4758-bc1b-7c30a23aafd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"71122d49-824c-4b8d-b2d6-e42dabc7173a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-891097" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-891097
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.93s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-192630 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-192630 --network=: (23.761498317s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-192630" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-192630
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-192630: (2.154381156s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.93s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.19s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-200465 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-200465 --network=bridge: (21.1804647s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-200465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-200465
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-200465: (1.990618607s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.19s)

                                                
                                    
TestKicExistingNetwork (25.43s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 08:50:56.864606    9303 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 08:50:56.882708    9303 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 08:50:56.882775    9303 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 08:50:56.882791    9303 cli_runner.go:164] Run: docker network inspect existing-network
W1213 08:50:56.899384    9303 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 08:50:56.899414    9303 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 08:50:56.899441    9303 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 08:50:56.899573    9303 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 08:50:56.916702    9303 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b9f57735373a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:3a:37:6d:21:84} reservation:<nil>}
I1213 08:50:56.917066    9303 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f530c0}
I1213 08:50:56.917096    9303 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 08:50:56.917141    9303 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 08:50:56.962837    9303 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-863354 --network=existing-network
E1213 08:51:03.549010    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-863354 --network=existing-network: (23.29494472s)
helpers_test.go:176: Cleaning up "existing-network-863354" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-863354
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-863354: (1.999128245s)
I1213 08:51:22.275315    9303 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.43s)

                                                
                                    
TestKicCustomSubnet (25.83s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-217564 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-217564 --subnet=192.168.60.0/24: (23.686766933s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-217564 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-217564" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-217564
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-217564: (2.126843986s)
--- PASS: TestKicCustomSubnet (25.83s)

                                                
                                    
TestKicStaticIP (23.16s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-460959 --static-ip=192.168.200.200
E1213 08:51:50.685810    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-460959 --static-ip=192.168.200.200: (20.88660993s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-460959 ip
helpers_test.go:176: Cleaning up "static-ip-460959" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-460959
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-460959: (2.127008295s)
--- PASS: TestKicStaticIP (23.16s)
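Note: a minimal manual equivalent of this test, using the flags shown above (profile name and address are illustrative):

minikube start -p static-ip --static-ip=192.168.200.200 --driver=docker --container-runtime=crio
minikube -p static-ip ip        # expected to print the requested address, 192.168.200.200
minikube delete -p static-ip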

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (44.02s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-611386 --driver=docker  --container-runtime=crio
E1213 08:52:26.616384    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-611386 --driver=docker  --container-runtime=crio: (18.782911775s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-613437 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-613437 --driver=docker  --container-runtime=crio: (19.341940398s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-611386
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-613437
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-613437" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-613437
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-613437: (2.320378687s)
helpers_test.go:176: Cleaning up "first-611386" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-611386
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-611386: (2.365246193s)
--- PASS: TestMinikubeProfile (44.02s)
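Note: the profile-switching flow above can be repeated manually; a sketch with illustrative profile names (the test switches the active profile and then reads the profile list back as JSON):

minikube start -p first --driver=docker --container-runtime=crio
minikube start -p second --driver=docker --container-runtime=crio
minikube profile first          # make "first" the active profile
minikube profile list -ojson
minikube profile second
minikube profile list -ojson
minikube delete -p second && minikube delete -p first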

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.95s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-546816 --memory=3072 --mount-string /tmp/TestMountStartserial589620889/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-546816 --memory=3072 --mount-string /tmp/TestMountStartserial589620889/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.948395055s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-546816 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
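Note: the start-with-mount and verify steps above amount to the following sketch (profile name and host directory are illustrative; the mount flags mirror the ones in the run):

# start a node with a host-directory mount but without Kubernetes
minikube start -p mount-test --memory=3072 \
  --mount-string /tmp/host-dir:/minikube-host \
  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
  --no-kubernetes --driver=docker --container-runtime=crio
# the host directory should be visible inside the node
minikube -p mount-test ssh -- ls /minikube-host
minikube delete -p mount-test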

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.7s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-557766 --memory=3072 --mount-string /tmp/TestMountStartserial589620889/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-557766 --memory=3072 --mount-string /tmp/TestMountStartserial589620889/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.699562432s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.70s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-557766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-546816 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-546816 --alsologtostderr -v=5: (1.668241576s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-557766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-557766
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-557766: (1.261469715s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.12s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-557766
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-557766: (6.118419821s)
--- PASS: TestMountStart/serial/RestartStopped (7.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-557766 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.69s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183942 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 08:54:28.260232    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183942 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.213916641s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.69s)
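Note: the two-node bring-up above reduces to the following sketch (profile name illustrative, flags as used in the run):

minikube start -p multinode --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
minikube -p multinode status      # both the control plane and the worker should report Running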

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.87s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-183942 -- rollout status deployment/busybox: (2.432225867s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-2m56f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-w9w6z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-2m56f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-w9w6z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-2m56f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-w9w6z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.87s)
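Note: a manual equivalent of the deployment and DNS check above, as a sketch. The manifest path is relative to the minikube test tree, and <pod-name> is a placeholder for one of the names printed by the get-pods step:

minikube kubectl -p multinode -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
minikube kubectl -p multinode -- rollout status deployment/busybox
minikube kubectl -p multinode -- get pods -o jsonpath='{.items[*].metadata.name}'
# resolve an in-cluster and an external name from one of the pods
minikube kubectl -p multinode -- exec <pod-name> -- nslookup kubernetes.default
minikube kubectl -p multinode -- exec <pod-name> -- nslookup kubernetes.io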

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.72s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-2m56f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-2m56f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-w9w6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-183942 -- exec busybox-7b57f96db7-w9w6z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                    
TestMultiNode/serial/AddNode (22.46s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183942 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-183942 -v=5 --alsologtostderr: (21.809829555s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.46s)
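Note: adding a node to an existing profile, as exercised above (profile name illustrative):

minikube node add -p multinode        # adds a further worker node (m03 in the run above)
minikube -p multinode status          # all nodes, including the new one, should be listed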

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-183942 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.76s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp testdata/cp-test.txt multinode-183942:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3352432662/001/cp-test_multinode-183942.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942:/home/docker/cp-test.txt multinode-183942-m02:/home/docker/cp-test_multinode-183942_multinode-183942-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test_multinode-183942_multinode-183942-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942:/home/docker/cp-test.txt multinode-183942-m03:/home/docker/cp-test_multinode-183942_multinode-183942-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test_multinode-183942_multinode-183942-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp testdata/cp-test.txt multinode-183942-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3352432662/001/cp-test_multinode-183942-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m02:/home/docker/cp-test.txt multinode-183942:/home/docker/cp-test_multinode-183942-m02_multinode-183942.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test_multinode-183942-m02_multinode-183942.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m02:/home/docker/cp-test.txt multinode-183942-m03:/home/docker/cp-test_multinode-183942-m02_multinode-183942-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test_multinode-183942-m02_multinode-183942-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp testdata/cp-test.txt multinode-183942-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3352432662/001/cp-test_multinode-183942-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m03:/home/docker/cp-test.txt multinode-183942:/home/docker/cp-test_multinode-183942-m03_multinode-183942.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942 "sudo cat /home/docker/cp-test_multinode-183942-m03_multinode-183942.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 cp multinode-183942-m03:/home/docker/cp-test.txt multinode-183942-m02:/home/docker/cp-test_multinode-183942-m03_multinode-183942-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 ssh -n multinode-183942-m02 "sudo cat /home/docker/cp-test_multinode-183942-m03_multinode-183942-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.76s)
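Note: the copy matrix above boils down to `minikube cp` plus an ssh read-back; a sketch with an illustrative profile name (secondary node names follow the <profile>-m02 pattern seen in the run):

# copy a local file onto a node, then read it back over ssh
minikube -p multinode cp testdata/cp-test.txt multinode:/home/docker/cp-test.txt
minikube -p multinode ssh -n multinode "sudo cat /home/docker/cp-test.txt"
# node-to-node copies use <node>:<path> on both sides
minikube -p multinode cp multinode:/home/docker/cp-test.txt multinode-m02:/home/docker/cp-test.txt
minikube -p multinode ssh -n multinode-m02 "sudo cat /home/docker/cp-test.txt"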

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-183942 node stop m03: (1.278033159s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183942 status: exit status 7 (496.666172ms)

                                                
                                                
-- stdout --
	multinode-183942
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183942-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183942-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr: exit status 7 (487.592287ms)

                                                
                                                
-- stdout --
	multinode-183942
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-183942-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-183942-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:55:33.013682  163859 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:55:33.013770  163859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:55:33.013774  163859 out.go:374] Setting ErrFile to fd 2...
	I1213 08:55:33.013778  163859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:55:33.013964  163859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:55:33.014118  163859 out.go:368] Setting JSON to false
	I1213 08:55:33.014140  163859 mustload.go:66] Loading cluster: multinode-183942
	I1213 08:55:33.014213  163859 notify.go:221] Checking for updates...
	I1213 08:55:33.014546  163859 config.go:182] Loaded profile config "multinode-183942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:55:33.014562  163859 status.go:174] checking status of multinode-183942 ...
	I1213 08:55:33.015000  163859 cli_runner.go:164] Run: docker container inspect multinode-183942 --format={{.State.Status}}
	I1213 08:55:33.033335  163859 status.go:371] multinode-183942 host status = "Running" (err=<nil>)
	I1213 08:55:33.033363  163859 host.go:66] Checking if "multinode-183942" exists ...
	I1213 08:55:33.033632  163859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183942
	I1213 08:55:33.051212  163859 host.go:66] Checking if "multinode-183942" exists ...
	I1213 08:55:33.051469  163859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:55:33.051550  163859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183942
	I1213 08:55:33.069041  163859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/multinode-183942/id_rsa Username:docker}
	I1213 08:55:33.163127  163859 ssh_runner.go:195] Run: systemctl --version
	I1213 08:55:33.169530  163859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:55:33.182331  163859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 08:55:33.235629  163859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-13 08:55:33.22613746 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 08:55:33.236211  163859 kubeconfig.go:125] found "multinode-183942" server: "https://192.168.67.2:8443"
	I1213 08:55:33.236247  163859 api_server.go:166] Checking apiserver status ...
	I1213 08:55:33.236289  163859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 08:55:33.247844  163859 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1213 08:55:33.256251  163859 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1213 08:55:33.256326  163859 ssh_runner.go:195] Run: ls
	I1213 08:55:33.259946  163859 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 08:55:33.264170  163859 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 08:55:33.264192  163859 status.go:463] multinode-183942 apiserver status = Running (err=<nil>)
	I1213 08:55:33.264202  163859 status.go:176] multinode-183942 status: &{Name:multinode-183942 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:55:33.264220  163859 status.go:174] checking status of multinode-183942-m02 ...
	I1213 08:55:33.264464  163859 cli_runner.go:164] Run: docker container inspect multinode-183942-m02 --format={{.State.Status}}
	I1213 08:55:33.281886  163859 status.go:371] multinode-183942-m02 host status = "Running" (err=<nil>)
	I1213 08:55:33.281909  163859 host.go:66] Checking if "multinode-183942-m02" exists ...
	I1213 08:55:33.282161  163859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-183942-m02
	I1213 08:55:33.298804  163859 host.go:66] Checking if "multinode-183942-m02" exists ...
	I1213 08:55:33.299077  163859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 08:55:33.299112  163859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-183942-m02
	I1213 08:55:33.317665  163859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22128-5776/.minikube/machines/multinode-183942-m02/id_rsa Username:docker}
	I1213 08:55:33.410704  163859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 08:55:33.422682  163859 status.go:176] multinode-183942-m02 status: &{Name:multinode-183942-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:55:33.422721  163859 status.go:174] checking status of multinode-183942-m03 ...
	I1213 08:55:33.422981  163859 cli_runner.go:164] Run: docker container inspect multinode-183942-m03 --format={{.State.Status}}
	I1213 08:55:33.440354  163859 status.go:371] multinode-183942-m03 host status = "Stopped" (err=<nil>)
	I1213 08:55:33.440374  163859 status.go:384] host is not running, skipping remaining checks
	I1213 08:55:33.440379  163859 status.go:176] multinode-183942-m03 status: &{Name:multinode-183942-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
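Note: stopping a single node as above; a sketch (profile name illustrative). As the output above shows, `status` exits with code 7 while any node is stopped:

minikube -p multinode node stop m03   # stop only the third node
minikube -p multinode status          # exit status 7; m03 reported as Stopped
minikube -p multinode node start m03  # bring it back, as in StartAfterStop below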

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.3s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-183942 node start m03 -v=5 --alsologtostderr: (6.611258259s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.30s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.65s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183942
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-183942
E1213 08:56:03.549122    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-183942: (29.579606665s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183942 --wait=true -v=5 --alsologtostderr
E1213 08:56:50.684766    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183942 --wait=true -v=5 --alsologtostderr: (51.947084543s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183942
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.65s)
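Note: the stop/restart round trip above, checking that the node list is preserved; a sketch with an illustrative profile name:

minikube node list -p multinode       # record the node list
minikube stop -p multinode
minikube start -p multinode --wait=true
minikube node list -p multinode       # the same nodes should be listed after the restart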

                                                
                                    
TestMultiNode/serial/DeleteNode (5.22s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-183942 node delete m03: (4.623595734s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.39s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-183942 stop: (30.185607069s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183942 status: exit status 7 (98.272537ms)

                                                
                                                
-- stdout --
	multinode-183942
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183942-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr: exit status 7 (104.555749ms)

                                                
                                                
-- stdout --
	multinode-183942
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-183942-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 08:57:37.952779  173660 out.go:360] Setting OutFile to fd 1 ...
	I1213 08:57:37.953020  173660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:37.953029  173660 out.go:374] Setting ErrFile to fd 2...
	I1213 08:57:37.953034  173660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 08:57:37.953225  173660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 08:57:37.953384  173660 out.go:368] Setting JSON to false
	I1213 08:57:37.953408  173660 mustload.go:66] Loading cluster: multinode-183942
	I1213 08:57:37.953549  173660 notify.go:221] Checking for updates...
	I1213 08:57:37.953793  173660 config.go:182] Loaded profile config "multinode-183942": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 08:57:37.953809  173660 status.go:174] checking status of multinode-183942 ...
	I1213 08:57:37.954209  173660 cli_runner.go:164] Run: docker container inspect multinode-183942 --format={{.State.Status}}
	I1213 08:57:37.973598  173660 status.go:371] multinode-183942 host status = "Stopped" (err=<nil>)
	I1213 08:57:37.973623  173660 status.go:384] host is not running, skipping remaining checks
	I1213 08:57:37.973629  173660 status.go:176] multinode-183942 status: &{Name:multinode-183942 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 08:57:37.973661  173660 status.go:174] checking status of multinode-183942-m02 ...
	I1213 08:57:37.973928  173660 cli_runner.go:164] Run: docker container inspect multinode-183942-m02 --format={{.State.Status}}
	I1213 08:57:37.994877  173660 status.go:371] multinode-183942-m02 host status = "Stopped" (err=<nil>)
	I1213 08:57:37.994921  173660 status.go:384] host is not running, skipping remaining checks
	I1213 08:57:37.994939  173660 status.go:176] multinode-183942-m02 status: &{Name:multinode-183942-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.77s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183942 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 08:58:13.749653    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183942 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.188503997s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-183942 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.77s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.69s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-183942
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183942-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-183942-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.478166ms)

                                                
                                                
-- stdout --
	* [multinode-183942-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-183942-m02' is duplicated with machine name 'multinode-183942-m02' in profile 'multinode-183942'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-183942-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-183942-m03 --driver=docker  --container-runtime=crio: (22.945167494s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-183942
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-183942: exit status 80 (284.987702ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-183942 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-183942-m03 already exists in multinode-183942-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-183942-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-183942-m03: (2.324235381s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.69s)

                                                
                                    
TestPreload (96.95s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1213 08:59:28.260374    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986116 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (42.111673895s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986116 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-986116 image pull gcr.io/k8s-minikube/busybox: (1.489840412s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-986116
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-986116: (7.977598172s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-986116 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-986116 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (42.733613997s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-986116 image list
helpers_test.go:176: Cleaning up "test-preload-986116" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-986116
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-986116: (2.405002163s)
--- PASS: TestPreload (96.95s)
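Note: the preload round trip above can be reproduced with the same flags; a sketch with an illustrative profile name (the pulled busybox image should still be present after the second start):

minikube start -p preload-test --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio
minikube -p preload-test image pull gcr.io/k8s-minikube/busybox
minikube stop -p preload-test
minikube start -p preload-test --preload=true --wait=true --driver=docker --container-runtime=crio
minikube -p preload-test image list   # expect gcr.io/k8s-minikube/busybox in the list
minikube delete -p preload-test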

                                                
                                    
TestScheduledStopUnix (98.4s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-319284 --memory=3072 --driver=docker  --container-runtime=crio
E1213 09:00:51.325180    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-319284 --memory=3072 --driver=docker  --container-runtime=crio: (22.031485494s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-319284 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:00:58.688469  190763 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:00:58.688609  190763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:00:58.688618  190763 out.go:374] Setting ErrFile to fd 2...
	I1213 09:00:58.688623  190763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:00:58.688868  190763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:00:58.689137  190763 out.go:368] Setting JSON to false
	I1213 09:00:58.689238  190763 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:00:58.689582  190763 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:00:58.689675  190763 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/config.json ...
	I1213 09:00:58.689880  190763 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:00:58.689995  190763 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-319284 -n scheduled-stop-319284
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-319284 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:00:59.085011  190913 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:00:59.085283  190913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:00:59.085295  190913 out.go:374] Setting ErrFile to fd 2...
	I1213 09:00:59.085298  190913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:00:59.085472  190913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:00:59.085733  190913 out.go:368] Setting JSON to false
	I1213 09:00:59.085931  190913 daemonize_unix.go:73] killing process 190798 as it is an old scheduled stop
	I1213 09:00:59.086037  190913 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:00:59.086480  190913 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:00:59.086595  190913 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/config.json ...
	I1213 09:00:59.086810  190913 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:00:59.086944  190913 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1213 09:00:59.091110    9303 retry.go:31] will retry after 146.344µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.092312    9303 retry.go:31] will retry after 94.143µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.093449    9303 retry.go:31] will retry after 276.703µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.094639    9303 retry.go:31] will retry after 277.654µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.095765    9303 retry.go:31] will retry after 679.201µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.096909    9303 retry.go:31] will retry after 987.79µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.098033    9303 retry.go:31] will retry after 672.574µs: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.099167    9303 retry.go:31] will retry after 1.22739ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.101366    9303 retry.go:31] will retry after 1.464903ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.103605    9303 retry.go:31] will retry after 2.526437ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.106775    9303 retry.go:31] will retry after 4.931137ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.111992    9303 retry.go:31] will retry after 11.139132ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.124233    9303 retry.go:31] will retry after 12.370934ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.137456    9303 retry.go:31] will retry after 24.491343ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.162734    9303 retry.go:31] will retry after 28.430138ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
I1213 09:00:59.191983    9303 retry.go:31] will retry after 24.086228ms: open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-319284 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1213 09:01:03.548620    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-319284 -n scheduled-stop-319284
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-319284
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-319284 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1213 09:01:24.963749  191468 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:01:24.964018  191468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:01:24.964027  191468 out.go:374] Setting ErrFile to fd 2...
	I1213 09:01:24.964032  191468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:01:24.964224  191468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:01:24.964451  191468 out.go:368] Setting JSON to false
	I1213 09:01:24.964549  191468 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:01:24.964860  191468 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:01:24.964932  191468 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/scheduled-stop-319284/config.json ...
	I1213 09:01:24.965112  191468 mustload.go:66] Loading cluster: scheduled-stop-319284
	I1213 09:01:24.965220  191468 config.go:182] Loaded profile config "scheduled-stop-319284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1213 09:01:50.686250    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-319284
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-319284: exit status 7 (81.008524ms)

                                                
                                                
-- stdout --
	scheduled-stop-319284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-319284 -n scheduled-stop-319284
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-319284 -n scheduled-stop-319284: exit status 7 (82.297799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-319284" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-319284
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-319284: (4.864335844s)
--- PASS: TestScheduledStopUnix (98.40s)
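Note: the scheduled-stop flow above, as a manual sketch (profile name illustrative; the sleep is only there to let the 15s schedule fire and is not part of the test's mechanism):

minikube start -p sched-test --memory=3072 --driver=docker --container-runtime=crio
minikube stop -p sched-test --schedule 5m             # schedule a stop in the background
minikube stop -p sched-test --cancel-scheduled        # cancel all pending scheduled stops
minikube stop -p sched-test --schedule 15s            # schedule again and let it fire
sleep 30
minikube status --format={{.Host}} -p sched-test      # prints "Stopped"; exits with status 7
minikube delete -p sched-test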

                                                
                                    
TestInsufficientStorage (11.75s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-469767 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-469767 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.298187521s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"93dba7a1-56cd-4ff9-90da-7e240e2a7355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-469767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5c9fc1a-2e0b-4be5-a44c-162c4783aa0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22128"}}
	{"specversion":"1.0","id":"bf511733-f952-49aa-8fed-b7fbe36ab738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99f1262e-d590-40b2-bb34-426bb21b9b19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig"}}
	{"specversion":"1.0","id":"8e9ba574-e1d9-4383-ad33-1fb40941cb7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube"}}
	{"specversion":"1.0","id":"adbd2815-a4a7-4cf5-a584-da439f02f125","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e78c7d6c-a071-4642-9fae-7916c697914e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e84702ab-7aff-4c7a-a356-c0c7c093f85f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"03274935-eb7a-4939-abe0-2df8d8ec9f24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9729ac2d-57b8-488b-9782-91dbf5dd9d19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2cfc609-fc66-42ad-9861-f1343220a86f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"595e2acf-a38c-4173-81b7-3a0fff0e52c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-469767\" primary control-plane node in \"insufficient-storage-469767\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a5904bd-e8ff-4759-bc19-3a82bb91dfaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765275396-22083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9023424-763c-4561-8403-c31462b3935d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"74a41cc3-93d2-4820-8862-4703ee767391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-469767 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-469767 --output=json --layout=cluster: exit status 7 (290.555296ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-469767","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-469767","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 09:02:24.574781  193975 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-469767" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-469767 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-469767 --output=json --layout=cluster: exit status 7 (282.701138ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-469767","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-469767","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1213 09:02:24.858855  194087 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-469767" does not appear in /home/jenkins/minikube-integration/22128-5776/kubeconfig
	E1213 09:02:24.868943  194087 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/insufficient-storage-469767/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-469767" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-469767
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-469767: (1.881090951s)
--- PASS: TestInsufficientStorage (11.75s)
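
For reference, the failure above is a simulated low-disk scenario: the harness caps the reported capacity with MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, so "minikube start" exits with code 26 (RSRC_DOCKER_STORAGE) and "status --layout=cluster" reports StatusCode 507. A minimal sketch of reproducing the check by hand, assuming the same environment variables; the profile name is illustrative and jq is not used by the test itself:
	export MINIKUBE_TEST_STORAGE_CAPACITY=100
	export MINIKUBE_TEST_AVAILABLE_STORAGE=19
	out/minikube-linux-amd64 start -p insufficient-storage-demo --memory=3072 --output=json --driver=docker --container-runtime=crio
	echo $?   # expected: 26 while the simulated disk is full
	out/minikube-linux-amd64 status -p insufficient-storage-demo --output=json --layout=cluster | jq '.StatusCode, .StatusName'
	# expected: 507 and "InsufficientStorage"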

                                                
                                    
TestRunningBinaryUpgrade (52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1912462223 start -p running-upgrade-654480 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1912462223 start -p running-upgrade-654480 --memory=3072 --vm-driver=docker  --container-runtime=crio: (26.334434399s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-654480 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-654480 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.592394116s)
helpers_test.go:176: Cleaning up "running-upgrade-654480" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-654480
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-654480: (2.50782146s)
--- PASS: TestRunningBinaryUpgrade (52.00s)

                                                
                                    
TestKubernetesUpgrade (390.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.084598128s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-814560
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-814560: (1.334649211s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-814560 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-814560 status --format={{.Host}}: exit status 7 (85.617346ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 09:04:28.259659    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.4733198s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-814560 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.132711ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-814560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-814560
	    minikube start -p kubernetes-upgrade-814560 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8145602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-814560 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-814560 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m36.588701063s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-814560" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-814560
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-814560: (2.707285873s)
--- PASS: TestKubernetesUpgrade (390.45s)
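
The sequence exercised above is the supported upgrade path only: start on v1.28.0, stop, restart on v1.35.0-beta.0, then confirm that an in-place downgrade back to v1.28.0 is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). Condensed from the commands logged above; the profile name here is illustrative:
	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p k8s-upgrade-demo
	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio
	kubectl --context k8s-upgrade-demo version --output=json
	out/minikube-linux-amd64 start -p k8s-upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	echo $?   # expected: 106 -- downgrades require delete + recreate, as the suggestion above shows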

                                                
                                    
TestMissingContainerUpgrade (90.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.902285256 start -p missing-upgrade-418180 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.902285256 start -p missing-upgrade-418180 --memory=3072 --driver=docker  --container-runtime=crio: (41.476981394s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-418180
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-418180: (1.785349776s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-418180
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-418180 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-418180 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.180319744s)
helpers_test.go:176: Cleaning up "missing-upgrade-418180" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-418180
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-418180: (4.784538831s)
--- PASS: TestMissingContainerUpgrade (90.90s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.4382ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-442691] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
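
As the MK_USAGE error above indicates, --no-kubernetes and --kubernetes-version are mutually exclusive: the version either has to be dropped from the command line or, if it comes from global config, unset first. A minimal sketch following the suggestion printed above:
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --memory=3072 --driver=docker --container-runtime=crio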

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-442691 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-442691 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.846366675s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-442691 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.297337194s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-442691 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-442691 status -o json: exit status 2 (316.157804ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-442691","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-442691
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-442691: (2.017710087s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.63s)
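
The exit status 2 above is expected for "status -o json" when the host is running but the Kubernetes components are stopped; the JSON payload on stdout is what matters. A minimal sketch of checking those fields directly, assuming jq is available (jq is not part of the test):
	out/minikube-linux-amd64 -p NoKubernetes-442691 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	# expected for a --no-kubernetes profile: Running / Stopped / Stopped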

                                                
                                    
TestNetworkPlugins/group/false (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-833990 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-833990 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (165.73068ms)

                                                
                                                
-- stdout --
	* [false-833990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22128
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 09:03:13.498394  207037 out.go:360] Setting OutFile to fd 1 ...
	I1213 09:03:13.498510  207037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:03:13.498519  207037 out.go:374] Setting ErrFile to fd 2...
	I1213 09:03:13.498523  207037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1213 09:03:13.498742  207037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-5776/.minikube/bin
	I1213 09:03:13.499234  207037 out.go:368] Setting JSON to false
	I1213 09:03:13.500271  207037 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2745,"bootTime":1765613848,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1213 09:03:13.500328  207037 start.go:143] virtualization: kvm guest
	I1213 09:03:13.502253  207037 out.go:179] * [false-833990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1213 09:03:13.503425  207037 notify.go:221] Checking for updates...
	I1213 09:03:13.503473  207037 out.go:179]   - MINIKUBE_LOCATION=22128
	I1213 09:03:13.504883  207037 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 09:03:13.506191  207037 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22128-5776/kubeconfig
	I1213 09:03:13.507414  207037 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-5776/.minikube
	I1213 09:03:13.512029  207037 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1213 09:03:13.513318  207037 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 09:03:13.515010  207037 config.go:182] Loaded profile config "NoKubernetes-442691": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1213 09:03:13.515106  207037 config.go:182] Loaded profile config "missing-upgrade-418180": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1213 09:03:13.515183  207037 config.go:182] Loaded profile config "offline-crio-403965": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1213 09:03:13.515259  207037 driver.go:422] Setting default libvirt URI to qemu:///system
	I1213 09:03:13.539707  207037 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1213 09:03:13.539816  207037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 09:03:13.596879  207037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-12-13 09:03:13.587015876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1213 09:03:13.596992  207037 docker.go:319] overlay module found
	I1213 09:03:13.598616  207037 out.go:179] * Using the docker driver based on user configuration
	I1213 09:03:13.599835  207037 start.go:309] selected driver: docker
	I1213 09:03:13.599851  207037 start.go:927] validating driver "docker" against <nil>
	I1213 09:03:13.599864  207037 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 09:03:13.601749  207037 out.go:203] 
	W1213 09:03:13.602955  207037 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 09:03:13.604221  207037 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-833990 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:02:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-442691
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-418180
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-403965
contexts:
- context:
    cluster: NoKubernetes-442691
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:02:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-442691
  name: NoKubernetes-442691
- context:
    cluster: missing-upgrade-418180
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-418180
  name: missing-upgrade-418180
- context:
    cluster: offline-crio-403965
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-403965
  name: offline-crio-403965
current-context: offline-crio-403965
kind: Config
users:
- name: NoKubernetes-442691
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/NoKubernetes-442691/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/NoKubernetes-442691/client.key
- name: missing-upgrade-418180
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.key
- name: offline-crio-403965
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-833990

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833990"

                                                
                                                
----------------------- debugLogs end: false-833990 [took: 3.24061736s] --------------------------------
helpers_test.go:176: Cleaning up "false-833990" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-833990
--- PASS: TestNetworkPlugins/group/false (3.57s)
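
This "false" group is a negative test: with the crio runtime, --cni=false is rejected up front ("The \"crio\" container runtime requires CNI"), which is why every debugLogs probe above reports a missing profile or context. To actually exercise a network plugin with crio, a CNI has to be selected explicitly; a sketch, with an illustrative profile name and bridge as one of the built-in --cni values:
	out/minikube-linux-amd64 start -p cni-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio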

                                                
                                    
TestNoKubernetes/serial/Start (7.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-442691 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.964508188s)
--- PASS: TestNoKubernetes/serial/Start (7.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22128-5776/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-442691 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-442691 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.063836ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.583913263s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-442691
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-442691: (2.687191371s)
--- PASS: TestNoKubernetes/serial/Stop (2.69s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-442691 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-442691 --driver=docker  --container-runtime=crio: (6.944016304s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-442691 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-442691 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.59597ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (40.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2391066786 start -p stopped-upgrade-323442 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2391066786 start -p stopped-upgrade-323442 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.115808524s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2391066786 -p stopped-upgrade-323442 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2391066786 -p stopped-upgrade-323442 stop: (3.777117779s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-323442 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-323442 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.372703166s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (40.27s)
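
The upgrade pattern validated here is: create a cluster with the previous release binary, stop it, then start the same profile again with the binary under test. Condensed from the commands above; the /tmp/minikube-v1.35.0.* path is a temporary copy of the old release that the harness stages:
	/tmp/minikube-v1.35.0.2391066786 start -p stopped-upgrade-323442 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.35.0.2391066786 -p stopped-upgrade-323442 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-323442 --memory=3072 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 logs -p stopped-upgrade-323442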

                                                
                                    
TestPause/serial/Start (71.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-154627 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-154627 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m11.552960261s)
--- PASS: TestPause/serial/Start (71.55s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-323442
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-323442: (1.169318432s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.629818516s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.63s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-833990 "pgrep -a kubelet"
I1213 09:05:37.178076    9303 config.go:182] Loaded profile config "auto-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-x8bpj" [55843c36-896b-4fa0-98da-e9d60409c6f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-x8bpj" [55843c36-896b-4fa0-98da-e9d60409c6f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004012955s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-154627 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-154627 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.169243972s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.698029129s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.70s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (46.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (46.3413358s)
--- PASS: TestNetworkPlugins/group/calico/Start (46.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-4f6j7" [2641b68f-6ff8-46a1-afc6-cafe50658f93] Running
E1213 09:06:50.684253    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-413795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003375166s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-833990 "pgrep -a kubelet"
I1213 09:06:52.912710    9303 config.go:182] Loaded profile config "kindnet-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kbjhq" [46b19f2a-31ce-4df3-8bd9-1980166b015a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kbjhq" [46b19f2a-31ce-4df3-8bd9-1980166b015a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003856247s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-h82jx" [29897398-7ab7-44f4-880b-8dd8950a31f2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-h82jx" [29897398-7ab7-44f4-880b-8dd8950a31f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00350282s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-833990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
I1213 09:07:03.052941    9303 config.go:182] Loaded profile config "calico-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.597069073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.60s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nsjhn" [b021a2aa-5c8a-4cdf-bf84-0b86580a5433] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nsjhn" [b021a2aa-5c8a-4cdf-bf84-0b86580a5433] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.011316228s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.25s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m6.923951047s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.92s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.222879435s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-833990 "pgrep -a kubelet"
I1213 09:07:54.881064    9303 config.go:182] Loaded profile config "custom-flannel-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tq6rg" [3e9c4252-48bc-4172-b266-27edb8fa073c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tq6rg" [3e9c4252-48bc-4172-b266-27edb8fa073c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004289285s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (62.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-833990 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.078635856s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-6k9g4" [f1d422b9-1711-486e-af4d-4c086573b7cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003343974s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-833990 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-833990 "pgrep -a kubelet"
I1213 09:08:29.804481    9303 config.go:182] Loaded profile config "flannel-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-833990 replace --force -f testdata/netcat-deployment.yaml
I1213 09:08:29.884477    9303 config.go:182] Loaded profile config "enable-default-cni-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rms9c" [3151d310-282c-41e1-9667-8b444f45c716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-rms9c" [3151d310-282c-41e1-9667-8b444f45c716] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003954959s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-z75p6" [0e8d2e23-bd44-4ed0-9872-5c2660baac09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-z75p6" [0e8d2e23-bd44-4ed0-9872-5c2660baac09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003805159s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (49.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.973396545s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (46.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1213 09:09:06.618264    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (46.956295977s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.96s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-833990 "pgrep -a kubelet"
I1213 09:09:25.617106    9303 config.go:182] Loaded profile config "bridge-833990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-833990 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-d65vz" [c688a4d1-e669-4a9b-bab6-669a98579be9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 09:09:28.260197    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/functional-331564/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-d65vz" [c688a4d1-e669-4a9b-bab6-669a98579be9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004145919s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-833990 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-833990 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-291522 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [85e67eca-1cd0-4ca0-ad34-aed52941adf1] Pending
helpers_test.go:353: "busybox" [85e67eca-1cd0-4ca0-ad34-aed52941adf1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [85e67eca-1cd0-4ca0-ad34-aed52941adf1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.059012636s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-291522 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-234538 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [70491080-fd46-4699-bfa3-ed5f7e53ce0f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [70491080-fd46-4699-bfa3-ed5f7e53ce0f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003895107s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-234538 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (37.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (37.317076993s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (37.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-291522 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-291522 --alsologtostderr -v=3: (16.335914973s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-234538 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-234538 --alsologtostderr -v=3: (16.079108165s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522: exit status 7 (86.269853ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-291522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-291522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (49.450132663s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-291522 -n no-preload-291522
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538: exit status 7 (88.835444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-234538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-234538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.950701885s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-234538 -n old-k8s-version-234538
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (41.181947744s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-379362 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [94b6473c-a93e-4ff9-a33c-f88515ae0f39] Pending
helpers_test.go:353: "busybox" [94b6473c-a93e-4ff9-a33c-f88515ae0f39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [94b6473c-a93e-4ff9-a33c-f88515ae0f39] Running
E1213 09:10:37.408008    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.414447    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.425848    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.447978    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.489441    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.571298    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:37.732828    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:38.054718    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:38.696687    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:39.978202    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003822656s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-379362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-379362 --alsologtostderr -v=3
E1213 09:10:47.661984    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:10:57.903862    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-379362 --alsologtostderr -v=3: (18.167869226s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362: exit status 7 (80.789457ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-379362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 09:11:03.547953    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/addons-916029/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-379362 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.873518105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-379362 -n embed-certs-379362
E1213 09:11:46.711565    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:46.793479    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:46.955705    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zg7qj" [4914acef-312e-4b27-8c8e-13ddf5be0116] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003565447s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jr9d8" [9751c53e-7c8a-44eb-b1b0-bff398385c78] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004361173s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5d3bc9d7-ad91-4181-95e2-346452464325] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5d3bc9d7-ad91-4181-95e2-346452464325] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003653043s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zg7qj" [4914acef-312e-4b27-8c8e-13ddf5be0116] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004716992s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-291522 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jr9d8" [9751c53e-7c8a-44eb-b1b0-bff398385c78] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00474935s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-234538 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-291522 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
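The allow-list of expected images lives in the test itself; a quick shell approximation of the same check (the grep pattern below is only a sketch of that allow-list, not the test's actual rule set):

	out/minikube-linux-amd64 -p no-preload-291522 image list \
	  | grep -v -E '^registry\.k8s\.io/|^gcr\.io/k8s-minikube/storage-provisioner'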

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-234538 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-361270 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-361270 --alsologtostderr -v=3: (16.377152458s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (22.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (22.032412845s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.03s)
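One way to confirm that the kubeadm.pod-network-cidr extra-config took effect is to read back the ClusterConfiguration that kubeadm stores in the cluster; the ConfigMap name and podSubnet field are standard kubeadm conventions and are assumed here, not shown in this log:

	kubectl --context newest-cni-966117 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet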

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270: exit status 7 (84.054479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2
E1213 09:11:46.628694    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:46.635140    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:46.647598    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:46.669074    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.2: (44.816429163s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270 -n default-k8s-diff-port-361270
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.12s)
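Taken together, the Stop, EnableAddonAfterStop and SecondStart steps above amount to the following cycle (commands copied from this run; the note about exit status 7 reflects the "Stopped" result logged above):

	out/minikube-linux-amd64 stop -p default-k8s-diff-port-361270 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361270   # prints "Stopped"; exit status 7 in this run
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-361270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p default-k8s-diff-port-361270 --memory=3072 --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.2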

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ntzt7" [55bff937-5a81-44ea-919b-7ec357f207c3] Running
E1213 09:11:47.277022    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:47.919177    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003898159s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-966117 --alsologtostderr -v=3
E1213 09:11:51.762733    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-966117 --alsologtostderr -v=3: (8.483401542s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.48s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ntzt7" [55bff937-5a81-44ea-919b-7ec357f207c3] Running
E1213 09:11:56.735924    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.742337    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.753778    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.775159    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.816582    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.884869    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:56.898258    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:57.059802    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:57.381300    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 09:11:58.023453    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003229112s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-379362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-379362 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117
E1213 09:11:59.305291    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/calico-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117: exit status 7 (77.372208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-966117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1213 09:11:59.347762    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/auto-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-966117 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (10.132756238s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-966117 -n newest-cni-966117
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-966117 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ww2pb" [237a7343-83ad-4a5f-9093-de528e47ff9f] Running
E1213 09:12:27.608438    9303 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/kindnet-833990/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004068068s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-ww2pb" [237a7343-83ad-4a5f-9093-de528e47ff9f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003783438s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-361270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361270 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    

Test skip (34/415)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
367 TestNetworkPlugins/group/kubenet 3.58
375 TestNetworkPlugins/group/cilium 3.82
390 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-833990 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:02:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-442691
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-418180
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-403965
contexts:
- context:
    cluster: NoKubernetes-442691
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:02:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-442691
  name: NoKubernetes-442691
- context:
    cluster: missing-upgrade-418180
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-418180
  name: missing-upgrade-418180
- context:
    cluster: offline-crio-403965
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-403965
  name: offline-crio-403965
current-context: offline-crio-403965
kind: Config
users:
- name: NoKubernetes-442691
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/NoKubernetes-442691/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/NoKubernetes-442691/client.key
- name: missing-upgrade-418180
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.key
- name: offline-crio-403965
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.key
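This kubeconfig only contains the NoKubernetes-442691, missing-upgrade-418180 and offline-crio-403965 profiles; kubenet-833990 was never started in this job, which is why every kubectl and minikube command in this debug dump fails with a missing-context or missing-profile error. Listing the contexts against the same kubeconfig the job uses (its exact path is not shown in this dump) would confirm it:

	kubectl config get-contexts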

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-833990

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833990"

                                                
                                                
----------------------- debugLogs end: kubenet-833990 [took: 3.415035858s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-833990" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-833990
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-833990 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-833990" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-418180
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22128-5776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-403965
contexts:
- context:
    cluster: missing-upgrade-418180
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: missing-upgrade-418180
  name: missing-upgrade-418180
- context:
    cluster: offline-crio-403965
    extensions:
    - extension:
        last-update: Sat, 13 Dec 2025 09:03:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-403965
  name: offline-crio-403965
current-context: offline-crio-403965
kind: Config
users:
- name: missing-upgrade-418180
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/missing-upgrade-418180/client.key
- name: offline-crio-403965
  user:
    client-certificate: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.crt
    client-key: /home/jenkins/minikube-integration/22128-5776/.minikube/profiles/offline-crio-403965/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-833990

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-833990" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833990"

                                                
                                                
----------------------- debugLogs end: cilium-833990 [took: 3.660808731s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-833990" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-833990
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-779931" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-779931
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    